The fifteenth annual TypeCon
conference was held in Portland,
Oregon on August 21–25, featuring a mix of session topics that
encompassed type design, letterpress printing, and modern font
software. Open source and open font licensing have become hot topics
in recent years—largely due to the rise of CSS web
fonts. But the signs are that open source is gaining even broader
acceptance, as evidenced by Paul Hunt and Miguel Sousa's presentation
on the Adobe Source Sans Pro and Source Code Pro fonts—and the
ripple effects they triggered within the company.
Hunt and Sousa are both part of Adobe's Type Team, which
produces both software tools and typefaces. Historically, both
categories of work have been proprietary, which is part of what made
the 2012 debut
of its open font families significant. The pair spoke about the development
of the project, from the company's internal motivations to how
interacting with the community affected the team's workflow and
technical decisions.
After giving a general overview of open source concepts and
development processes, Hunt discussed the decision to create an open
font in the first place. The company is a type foundry (selling its
own fonts), and it runs the web font subscription service Typekit.
But, as Hunt explained it, Adobe created the Source Sans Pro font
because the company has been developing more open source software in
recent years, and it needed fonts that it could incorporate into its
releases. In fact, he said, Source Sans Pro was first used in Strobe Media
Playback, an open source web video-player project. Since then,
the fonts have also been central to Adobe's Brackets, an open source, browser-based
HTML editor, and have been deployed in several other projects.
Initially, Hunt said, the company had considered taking one of its
existing commercial typefaces and releasing it under open source
terms, but the team eventually decided to create something new.
Source Sans Pro was inspired by Morris Fuller Benton's venerable News Gothic and
several of its contemporaries, he
said, and was optimized to be used in software user interfaces, while
still being readable in long blocks of continuous text. The initial
release was made in early August 2012, in 12 styles across six
weights. In September, a second typeface was added: Source
Code Pro, which is a monospaced font family designed for use in
coding.
Tooling and retooling
As Sousa explained, the fonts were developed in the (proprietary)
FontLab Studio font editor, using Adobe's Multiple
Master format (which greatly simplifies adding new font weights),
and built with the help of Adobe Font
Development Kit for OpenType (AFDKO), which is a suite of scripts
and command-line utilities. Internally, the team managed the font
sources with the company's installation of the Perforce revision control
system.
The team initially released the fonts on its SourceForge.net page,
providing TrueType and OpenType binaries plus the FontLab Studio
source files, together in one downloadable .ZIP archive. The fonts
were also made available through a number of font services, including
Typekit and Google Fonts. The reaction was
overwhelmingly positive; Hunt commented that the release announcement
remains the most-viewed post on the team's blog to this day, and there
were a number of stories about the project in the general tech press.
Users even began reporting bugs quite early on in the process. But
there were other comments, too—most notably criticism of the
location and structure of the releases themselves. Or, as Hunt put
it, there were comments of the form "you fools; open source
development takes place on GitHub." But that characterization was
tongue-in-cheek; such comments were not reprimands, but
encouragements. One cited the benefits of moving to GitHub:
attracting new users who would report bugs, public forks that tested
outside contributions, and the ability to accept or deny pull
requests from outsiders.
Those benefits evidently sounded good to the team, because it did
indeed migrate from .ZIP file releases on SourceForge.net to a fully
hosted repository on GitHub. It took a bit of time to get used to
Git's syntax and its differences from Perforce, but the team soon
found Git more useful. In addition, because FontLab Studio's native
source file format is closed and binary, the team also switched over
to using the XML-based Unified Font Object (UFO) format. The
text-based nature of UFO worked better with Git, but it also made the
sources editable by a wide variety of applications and third-party
scripts.
In short order, the team discovered several side benefits to the
new workflow. Not only was the UFO format better suited to Git's
revision control, but its human readability made visual inspection of
changes simple. Likewise, although the team was already well versed
in version control, Hunt said that the distributed nature of Git made
collaboration much easier—between internal team members as well
as with the community. Reports also came in that seeing the document
hierarchy in public proved educational for users and contributors,
both for how the fonts were designed and for how AFDKO is used.
Contributions
Hunt and Sousa then discussed several types of feedback they had
seen from the public project. There were bug reports—including
requests for additional character sets—changes to glyphs, comments, and
everything down to simple "+1"s, which Hunt told the audience were the
least helpful type of feedback. Hunt said that the team had
anticipated a significant number of design contributions, based on
what the community had said about moving to GitHub, but in reality
there have been very few. He chalked this up to a number of factors,
starting with the small number of people working in type design, but
also including the difficulty of rebuilding and testing fonts. He
also speculated that there might be a lot of people who would be
interested in contributing but do not know where to begin, and
encouraged the open source font community to help educate people.
But there have been other forms of contribution, he said. Logos
Bible Software commissioned
the addition of small capitals, subscripts, and superscripts to Source
Sans Pro. There have also been significantly more bug reports and
feedback comments on the open source typefaces than on Adobe's
commercial offerings, which Hunt said has led to improvements in the
commercial products as well. But perhaps the most fundamental change
to come from the project was the rethinking of the font development
workflow and toolchain, he said. The team has become more transparent
and collaborative in all of its projects, both internal and external. And it has continued to use Git
for version control, both on its open source and its proprietary
projects.
Sousa and Hunt ended the talk by encouraging everyone to contribute
to the fonts. "Although these were started by Adobe, they belong to
the community." In the Q&A period, an audience member asked what
contributions would be the best to work on; Hunt replied that anything
was welcome—there is a roadmap, including several new character
sets, but all improvements benefit everyone. Indeed, in one
particularly amusing example (perhaps because it incorporated a
considerable amount of ASCII art), fellow Adobe employee Frank Grießhammer
set out to add the Unicode box-drawing
characters to the fonts, an effort about which he delivered his
own rather detailed talk later in the conference.
Another audience member asked what the Adobe open font project had
learned that had not already been proven by Google Fonts. Hunt
responded that the primary drawback of Google Fonts is that it only
provides downloadable products. One can get the source
files, but the service is not set up to contribute back or
collaborate. The same audience member also asked if the team's
experience with open source development meant that AFDKO would be
released as open source. Sousa replied that they were working toward
that goal, but that they still had more internal work to do before an
open source release could be made. However, he also commented that
AFDKO
is free to download and use, and hoped that people would not let the
EULA be an excuse to not get involved.
To longtime free software supporters, some of the Adobe Type Team's
observations about open source development will come as no surprise.
Nevertheless, it was still refreshing to see that the company was
willing to listen to the feedback of community members, even in the
project's earliest days and even when that feedback recommended
a nearly complete overhaul of the project's tool set and workflow.
After all, one does not need to look very hard to find examples of
proprietary software vendors announcing that they will do open source
their way whether the community likes it or not. Those companies
usually reap frustration and disappointment, whereas the Adobe Type
Team found not only a good user community, but also several beneficial
changes it can incorporate into its other projects.
Outline fonts historically define their constituent glyphs with
just two "colors": the foreground color inside the character's
contours, and the background color that lies everywhere else. On a
generic document, those colors usually equate to black and white,
although both can be assigned to other colors in virtually every
application. But that limitation is an artifact of history: digital
fonts only take one back a few decades, while typefaces in other media
have existed for centuries and were never restricted to black and
white. So it should come as no
surprise that there is a push to adapt digital font formats to
natively support
multiple colors. At TypeCon 2013 in Portland, there were in fact two
competing proposals under discussion: one advocated by Microsoft, and
one backed by Adobe and Mozilla.
In graphic design circles, the term "chromatic fonts" is used to
describe fonts built to support multiple colors. Today, chromatic
fonts are generally incorporated into designs by taking a set of fonts with
complementary shapes and layering them in software, one color per layer. A good example
would be the HWT American
Chromatic system from Hamilton Wood Type, a digitized font family
based on 19th century wood type. The different layers would be
combined in layout software, such as Adobe InDesign, Photoshop, or
Inkscape. For the web, there are other approaches, such as
Colorfont.js, which we looked at in 2012. But even with
software support, layering and aligning blocks of text in multiple
fonts is tedious and error-prone. It would be simpler if the font
could support regions of different colors out of the box.
There is, however, a use case for multi-color
glyph support in fonts that is completely unrelated to multi-color
type as graphic designers use it, which is what leads to the two competing
proposals. That second use case is emoji, the
character-sized pictographs popular in text messages and online chat
applications. While emoji are relatively popular everywhere, as
Microsoft's Michelle Perham explained in her TypeCon talk, they are a
far bigger deal in Japan, where they originated with mobile phone
operator NTT DoCoMo. DoCoMo and its two largest competitors in Japan,
KDDI and SoftBank, standardized on a common set of emoji in the
mid-2000s.
While the standard was originally only used in Japan, it was
accepted into the Unicode 6.0 specification in 2010, incorporating 722
symbols. The standard emoji set has since spread to many other mobile
and Internet platforms, with one of the most notable being Apple's
iOS. iOS 5 was the first Apple release to support emoji in every
region of the globe, but iOS 6 took things a step further in late 2012
by rendering the emoji glyphs in multiple colors. As would be
expected, this proved to be a big hit among the demographics that care
about emoji, and, in June, Microsoft announced at one of its developer
events that Windows 8.1 would
feature color emoji, too.
My emoji
However, while Apple's color emoji implementation was a custom
iOS-only solution, Perham said, Microsoft decided instead to implement
its color support by adding a set of extensions to the OpenType font
format. OpenType already supports a large set of extensions through
the use of "feature" tags, which are text tables that usually describe
character-matching expressions, substitution rules, spacing
adjustments, and so on.
Microsoft's new tags operate a bit differently from the most common
OpenType tags. The first is called colr (following the
convention of four-letter-or-less tag names). The base emoji glyph is a
black-and-white outline character as it is in existing fonts; a
colr table is attached to it, which contains a list of
additional glyphs, one for each color used, along with an indexed
color value for each glyph (these indexes are mapped to colors in
another structure, discussed below). The list also indicates the z-order in
which the color glyphs are meant to be
stacked. If the device or application wants to display the color
version of the character, it just looks up the components in the
colr table and composites them together. If it does not care
about color, it can use the black-and-white base glyph like normal.
The actual color that each of the components should be rendered in
is listed in the second addition to OpenType, the cpal (for
"color palette") table. The cpal table encodes a color
palette as a list of RGBA values. The table must include one such
palette (palette number 0), but it can offer alternate
palettes as well, leaving it up to the software how best to expose the
palette options to the user. Perham pointed out that Apple's color
emoji font took a great deal of criticism by rendering all of the
human face emoji in decidedly Caucasian-looking colors; supporting
multiple palettes offers a relatively simple and space-efficient fix.
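Conceptually, the two tables fit together as in the following C sketch.
The struct layouts and names here are illustrative only, not the
byte-exact OpenType encoding:

    #include <stdint.h>

    /* One layer of a color glyph: an outline plus a palette index. */
    struct layer_record {
        uint16_t glyph_id;        /* glyph supplying this layer's contours */
        uint16_t palette_index;   /* index into the selected cpal palette */
    };

    /* colr: maps a base glyph to a z-ordered run of layers. */
    struct base_glyph_record {
        uint16_t base_glyph_id;
        uint16_t first_layer;     /* index into the shared layer array */
        uint16_t num_layers;
    };

    struct rgba { uint8_t r, g, b, a; };   /* one cpal palette entry */

    /* Composite one color glyph, bottom layer first. The draw_glyph()
     * callback is hypothetical; a real rasterizer supplies its own. */
    static void draw_color_glyph(const struct base_glyph_record *base,
                                 const struct layer_record *layers,
                                 const struct rgba *palette,
                                 void (*draw_glyph)(uint16_t, struct rgba))
    {
        for (uint16_t i = 0; i < base->num_layers; i++) {
            const struct layer_record *l = &layers[base->first_layer + i];
            draw_glyph(l->glyph_id, palette[l->palette_index]);
        }
    }

A renderer that does not care about color simply draws base_glyph_id on
its own and ignores both tables; switching palettes is just a matter of
passing a different cpal palette to the same loop.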
At the moment, Perham said, creating these color fonts is a manual
process, since no font editing software supports the new tables or
rendering in color. But Microsoft is moving forward with the fonts
for Windows 8.1, and believes the system is flexible enough to be used
for other multi-color symbols or for general typographic usage. Currently,
Microsoft's proposal is under discussion on the mpeg-OTspec mailing list; one must join the list to read
the messages or the posted files (including the specification), but
joining is not restricted or moderated. Microsoft has also announced
its intention to formally submit the proposal for standardization by
the ISO, which maintains OpenType as ISO/IEC 14496-22.
SVG
While Microsoft's solution is sure to reach users in short order,
it is not the only proposed system for adding color glyph support to
OpenType. There is one competing extension that can also be tested
today: the SVG glyphs for OpenType proposal, which is being developed
by a World Wide Web Consortium (W3C) community group
with most of the input coming from Adobe and Mozilla. Mozilla and
Adobe had each drafted its own proposal, but the two were merged into
a single draft
specification in July 2013.
The draft adds a single table to OpenType named svg. This
table can contain any number of SVG objects, and an index that
associates each object with a glyph encoding slot in the font. In
that sense, it differs considerably from Microsoft's specification,
because it references a completely different data type for the "color
version" of each supported glyph. There is also a color-palettes
subtable, which lists one or more sets of RGBA colors that can be used
to draw the SVG elements.
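The index amounts to a mapping from glyph IDs to embedded SVG
documents, roughly like this C sketch (the field names are guesses
based on the draft's description, not the normative encoding):

    #include <stddef.h>
    #include <stdint.h>

    /* svg: associates ranges of glyph IDs with SVG documents stored
     * elsewhere in the table. */
    struct svg_doc_record {
        uint16_t start_glyph_id;   /* first glyph covered, inclusive */
        uint16_t end_glyph_id;     /* last glyph covered, inclusive */
        uint32_t doc_offset;       /* offset of the SVG document */
        uint32_t doc_length;       /* length of the document in bytes */
    };

    /* Find the SVG document, if any, covering a given glyph. */
    static const struct svg_doc_record *
    find_svg_doc(const struct svg_doc_record *idx, size_t count, uint16_t gid)
    {
        for (size_t i = 0; i < count; i++)
            if (gid >= idx[i].start_glyph_id && gid <= idx[i].end_glyph_id)
                return &idx[i];
        return NULL;   /* fall back to the ordinary outline glyph */
    }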
Just before TypeCon, Mozilla's Robert O'Callahan posted
an update on the draft proposal. O'Callahan has added support for
SVG-in-OpenType fonts to his personal
builds of Firefox, and has also made a web
application to add SVG support to existing OpenType font files.
The most frequently cited advantage of the SVG-based approach is
that, by adopting an existing and well-known standard for the color
elements, OpenType avoids defining yet another graphics format.
The software stack would have to be modified in order to render the
SVG glyphs, but it could simply hand that duty off to the system's
existing SVG renderer. In comparison, the Microsoft colr
approach would require modification to the text rendering process as
well, but with entirely new code.
In addition, although at the moment the emphasis of both proposals
is on relatively simple, flat-color rendering, the SVG proposal can
also include other design features that would be of appeal to graphic
designers, such as color gradients, without further alteration. It
can also incorporate animated elements, which seems to be regarded by
many as the next step in the "evolution" of emoji. The principal
drawback, of course, is that SVG itself is a large
and complex standard. Incorporating SVG in OpenType fonts would by
necessity require defining a "safe" subset of SVG to support; no one
is keen on the notion of embedding streaming video elements into font
glyphs.
Or at least no one seems interested in such a feature today.
The future
The reality on the ground, though, is that no one is quite sure what type
designers will come up with when creating chromatic fonts with either
proposal. Gradients, of course, seem like a natural fit; one can
already see graphic designers applying gradient effects to plenty of
typefaces on the web and in print, and it is a clear analog to the
effects that can be produced with ink and a letterpress. Animation is
not quite as easy to predict; it is simple to imagine terrible
eyesores created with animated fonts, but then again one can already design
terrible eyesores with any number of existing web standards today.
Both proposals have a long road ahead before making it into a
standard. Microsoft, of course, is marching ahead with its own
implementation in the meantime. The SVG-glyphs-for-OpenType group
held a meeting during TypeCon (which I did not attend); there is more
discussion scheduled for the coming weeks and months. As a practical
matter, what the industry's type designers and graphic designers find
more useful is ultimately what matters the most, but it could still be
many years before chromatic fonts become widespread. On Sunday, type
designer Crystian Cruz gave a presentation showcasing many of the
advanced effects that are already possible with OpenType's existing
features—including randomization, contextual substitutions, and
character forms that adapt to fit the available space. But the
majority of these features are rarely, if ever, employed by existing
fonts, and they have scant support in software. Whether the
popularity of emoji is enough to spark a larger revolution in
chromatic fonts remains to be seen.
By Jonathan Corbet
August 27, 2013
File management does not seem to be a one-size-fits-all proposition. Thus,
while general-purpose file managers exist, it often appears that much of
the development effort goes into domain-specific management tools. We
have a whole set of photo management applications, for example, and even more
music managers. When it comes to electronic books, though, there seems
to be only one viable project out there:
Calibre. LWN last
looked at Calibre almost exactly two years
ago. The recent
version 1.0
release provides an obvious opportunity to see what has been happening
with this fast-moving utility.
While installing the new version, your editor quickly learned that some
things have not changed. The download page still
recommends against using versions of the program packaged by distributors,
"as those are often buggy/outdated." Instead, one is supposed
to install the project's binary release, done by instructing a Python
interpreter, running as root, to fetch a program from a web site and
execute it, sight unseen. After all, what could possibly go wrong?
For those who might balk at handing the keys to their
systems to a remote site in that way, it is also possible to download a
simple tarball and run the program from there. Source is also available,
naturally, but the build process has a lot of dependencies and is not for
those in a hurry.
One unfortunate antifeature that has not changed is the program's habit of
phoning home on startup, providing the user's IP address, system type,
Calibre version, and an identifying UUID to a Calibre server. One assumes
that most distributors have disabled this reporting, but binaries retrieved
directly from the site still do it.
On the other hand, it's worth noting that Calibre is good about not
breaking things with its
(frequent) updates. Plugins installed years ago still work without
problems (or even rebuilding), for example, and your editor's book library
itself has never had trouble with Calibre updates. Calibre is developing
rapidly, but some care has clearly gone into avoiding pain for users
when they upgrade.
New in 1.0
In one sense, there is not a whole lot in the 1.0 release itself that calls
for the big version number bump. Evidently, it was simply time to declare
that Calibre had reached that milestone. That said, there are a couple of
new features in this release, the most visible of which is the grid view
for books in the library. This view, as one might expect, simply shows
book covers in a matrix arrangement; it joins the longstanding list view
and the relatively useless animated "cover browser" as the third way of
selecting a book from the library. The grid view is nice; it can be
thought of as analogous to arranging one's books with the cover forward, while
the list view is like looking at the spines.
The 1.0 release also advertises a much faster database backend for managing
book information. With a library containing several dozen books, it is
hard to notice the difference; those with much larger libraries may have a
different experience. But large libraries present challenges of their own
in Calibre; the interface feels increasingly unwieldy as the number of
books grows. A set of bookshelves can be organized as the owner wishes; a
Calibre library is somewhat less flexible. There is, for example, no
mechanism for imposing any sort of hierarchy on books. Yes, this is the
21st century, and we all use tags for everything now, but, as Herbert Simon
pointed out years ago in The Sciences
of the Artificial, human
brains think in terms of hierarchies.
That said, one helpful feature that has been added (in the 0.9.28 release
in April 2013) is "virtual libraries," which are a sort of saved-search
mechanism. One can create a virtual library containing only books from a
specific author, say, or based on tags. Virtual libraries don't really
provide much that Calibre's search mechanism didn't already do, but they
make frequent searches easier and faster to get to.
Calibre has long been the definitive tool for the conversion of books
between formats. Over time it has been gaining features for the editing of
books as well. New in 1.0 is an editor allowing the modification of the
table of contents; it can perform simple tasks like adding an entry for an
interesting location, or it can be used to create a new table of contents
from the beginning. The automatic metadata downloading system can look
further afield for book covers, descriptions, and more. The embedding of
fonts within books is now fully supported. On the input side, it is now
possible to import books in the Microsoft Word (.docx) format; for output,
the PDF generation engine has been rewritten with, it is claimed, much
better results.
Beyond that, of course, the program continues to gain support for more
devices, more ebook formats, and more web sites as sources of content.
Calibre's ability to
download the contents of an online magazine issue and package them into a
book on a reader device remains one of its more useful features; it makes
offline reading easy. In general, the addition of features continues at
such a pace that many of them are hidden at the outset and must be manually
added to a toolbar (by way of the "preferences" screen) to be accessible.
Suffice to say that Calibre developer
Kovid Goyal does not seem to agree with the minimalist line of thought —
with regard to features or configuration options — that seems to dominate
some other projects.
All those features and options are certainly welcome, but there is no doubt
that Calibre could benefit from some focused design and usability work.
The interface is cluttered and useful functionality can be hard to find.
Who can say, off the top of their head, why there are two search
bars on the main screen? (One of them is actually a place to enter a name
to attach to a "saved search.") Or consider the dialog that places its
"cancel" button right in the middle of all the other options; what line
of reasoning would lead to that design? In general, Calibre gives the
impression of being a large collection of (mostly useful) tools haphazardly
thrown into a big box.
Finally, one significant "new" feature (new since the last review, but it was
available in 0.9) is proper support for Android devices. Linux systems
still struggle to support the MTP protocol in any sort of general way, but
Calibre's internal MTP implementation seems to work just fine. Calibre
recognizes Android devices when they are plugged in, and can tell multiple
devices apart; it can be configured to manage or ignore each device
independently. There is a simple rule system that allows different types
of files to be directed to different locations on the device. It all works
quite well, though interoperability with the Android Kindle app is
minimal. The third-party
plugins that deal so nicely with books on an actual Kindle device are
unable to read Kindle books on an Android device.
DRM
In general, of course, DRM continues to be one of the biggest disincentives
for anybody considering the purchase of an electronic book. It should be
possible to buy a book and read it using the device — and the software — of
one's choice. One should not have to worry about things like having
one's books deleted as a consequence of crossing a national border.
Books should not be subject to lock-in to a particular device or the
continuing good will of a remote corporation. Someday, hopefully, the book
industry will free itself of DRM; until then, people who are unwilling to
break the DRM on books they buy will not be able to make full use of a
library manager like Calibre.
One should note, of course, that there are quite a few
publishers selling DRM-free electronic books now. Patronizing those
publishers seems like a good way to ensure that they continue to make
DRM-free books available. Calibre, of course, has long had a built-in "get
books" feature that can search for electronic books from a large number of
sellers.
In summary: Calibre remains the definitive tool for the management of
electronic books for Linux — or for any other operating system. The needed
functionality is there, and it continues to develop at a fast pace.
Indeed, the pace is
surprisingly fast given that the lead developer is responsible for the vast
majority of the work — nearly 75% of the commits in the Calibre git repository.
Perhaps, someday, somebody will set out to create something that is
prettier, perhaps based on Calibre's book management and format-conversion
engine. But, for now, Calibre seems to be the only show in town; happily,
it is a highly functional and useful tool.
Security
By Jake Edge
August 28, 2013
There's been a lot of talk about reproducible (or deterministic) builds
recently for the purposes of verifying
that binaries come from the "right" source code. It's particularly
topical right now, at least in part because of the NSA spying disclosures
coupled with the concern that various
governments are actively trying to backdoor applications (especially
security applications). So, the Tor project and others (e.g. Bitcoin) have
been working on ways to create
reproducible builds.
But reproducible builds of necessity create predictable binaries. That
gives an attacker information about the layout and organization of the
code that can be used for return-oriented
programming (ROP) attacks. An alternative is to introduce
random changes into a binary as it is built to make these kinds of
attacks more difficult. Stephen Crane recently suggested
adding two kinds of code generation randomness into the LLVM compiler
framework in a post to the LLVMdev mailing list.
As part of a team at the University of California, Irvine,
Crane has been working on adding several kinds of randomness into
binaries. He proposed that the team submit patches for two types of
randomness for LLVM.
The first is "NOP insertion", which adds NOPs (i.e. no ops) between machine
instructions. The second is "scheduling randomization", which discards the
existing instruction
scheduling heuristics and randomly schedules any valid
instruction at each point. The result is a binary that still runs
correctly, is "slightly
slower", but is far more resistant to ROP attacks. It is a
"simplified subset" of the work described in a paper
[PDF] by the team.
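The NOP-insertion half of the idea is easy to illustrate outside of the
compiler. The toy C fragment below, which is not the team's LLVM pass,
copies a stream of pre-encoded x86 instructions into an output buffer
and flips a weighted coin before each one, emitting a one-byte NOP
(0x90) on heads:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* One already-encoded machine instruction. */
    struct insn {
        const uint8_t *bytes;
        size_t len;
    };

    /* Emit the instruction stream with NOPs randomly interleaved;
     * returns the number of bytes written to out. */
    static size_t emit_with_nops(const struct insn *insns, size_t n,
                                 uint8_t *out, double nop_prob)
    {
        size_t pos = 0;

        for (size_t i = 0; i < n; i++) {
            if ((double)rand() / RAND_MAX < nop_prob)
                out[pos++] = 0x90;               /* x86 NOP */
            for (size_t j = 0; j < insns[i].len; j++)
                out[pos++] = insns[i].bytes[j];
        }
        return pos;
    }

Doing this inside the compiler is what makes it practical: branch
targets and jump tables are laid out after the padding is inserted, so
nothing needs to be fixed up afterward.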
The technique is in some ways analogous to address-space layout
randomization (ASLR). In both cases, the layout of the code is altered
such that an attacker cannot predict where code of interest will live in
memory. Either can be defeated by attackers that have access to certain kinds
of information. For ASLR, determining the address of a library function in the
running executable is generally enough to defeat it. For randomized binaries, the
attacker would need to have read access to the binary itself to find the pieces
needed for an exploit.
ROP attacks use pieces of existing code in a binary to perform their
malicious task. By finding little snippets of code (typically ending in a
return) and calling them in the right order, the attack can perform
any operation that it needs to. ROP techniques came about after operating
systems started marking data as non-executable to thwart buffer overflows
and the like. Using ROP techniques, buffer overflows can still be used,
but without executing any code on the stack.
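Finding those snippets is largely mechanical. On x86, where 0xc3
encodes a return and instructions need not be aligned, an attacker can
simply scan the executable's text for return bytes and consider the
byte runs leading up to them, as in this simplified C sketch:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Report every spot where a potential gadget of up to max_len
     * bytes ends in a ret (0xc3). A real tool would also decode the
     * preceding bytes to see which gadgets are actually useful. */
    static void scan_gadgets(const uint8_t *text, size_t size, size_t max_len)
    {
        for (size_t i = 0; i < size; i++) {
            if (text[i] != 0xc3)
                continue;
            size_t start = i >= max_len ? i - max_len : 0;
            printf("gadget candidate: bytes %zu..%zu\n", start, i);
        }
    }

Randomizing instruction selection and spacing changes both the contents
and the addresses of those byte runs, which is precisely what makes a
precomputed gadget chain useless.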
Crane noted that there are other randomizations that the team has worked
on, but that they planned to start small when proposing patches. Nadav
Rotem asked about register allocation
randomization, for example, which Crane said could be added to the patch submission.
The patched compiler passes the existing LLVM test suite on x86_64, Crane
said. Implementing the changes for ARM is also underway.
Nick Kledzik asked how a software
distributor might be able to deliver randomized binaries, given that they
normally create a single binary that gets delivered to all of their
users. Crane had some thoughts on that,
including building multiple or individualized ("watermarked"
for example) binaries. For open source, especially for security-sensitive
binaries, users can just build their own to significantly raise the bar for
attacks. Crane noted that ROP attacks can be used for jailbreaking. That
might make the techniques of particular interest to LLVM sponsor Apple.
Security is always about trade-offs, and randomized binaries are just further
confirmation of that. Diverse binaries would make verification of the
correspondence between
source and binary much more difficult but would also make ROP
attacks harder. Given that most free software these days is built
with GCC, it would be nice to see similar patches for that compiler suite.
In any case, randomized binaries will soon be another tool available for
the security-sensitive.
Brief items
Consider the following hypothetical example: A young woman calls her
gynecologist; then immediately calls her mother; then a man who, during the
past few months, she had repeatedly spoken to on the telephone after 11pm;
followed by a call to a family planning center that also offers
abortions. A likely storyline emerges that would not be as evident by
examining the record of a single telephone call.
—
Ed
Felten [PDF] in a declaration on the dangers of "it's just metadata"
National Security Agency officers on several occasions have channeled their
agency’s enormous eavesdropping power to spy on love interests,
U.S. officials said.
The practice isn't frequent — one official estimated a handful of cases in the last decade — but it's common enough to garner its own spycraft label: LOVEINT.
—
The Wall Street Journal
So we're left with an agency that collects a ridiculous amount of info, and has around 1,000 employees (who are mostly actually employed by outside contractors) who can look through anything with no tracking, leaving no trace, and we're told that the data isn't abused. Really? Do Keith Alexander, James Clapper, President Obama, Dianne Feinstein and Mike Rogers really believe that none of those 1,000 sys admins have ever abused the system? And, do they believe that none of the people whom those thousand sys admins are friends with haven't had their friend "check out" information on someone else? Hell, imagine you were someone at the NSA who understood all of this already. If you wanted to abuse the system, why not befriend a sys admin and let him or her do the dirty work for you -- knowing that there would be no further trace?
Basically, it seems clear that the NSA has simply no idea how many abuses there were, and there are a very large number of people who had astounding levels of access and absolutely no controls or way to trace what they were doing.
—
Mike
Masnick
The chilling of free speech isn't just a consequence of surveillance. It's
also a motive. We adopt the art of self-censorship, closing down blogs,
watching what we say on Facebook, forgoing "private" email for fear that
any errant word may come back to haunt us in one, five or fifteen
years. "The mind's tendency to still feel observed when alone... can be
inhibiting,"
writes Janna Malamud Smith. Indeed.
—
Josh
Levy
Mike Perry
writes
about the motivations behind his deterministic build work on the Tor
Project blog. "
Current popular software development practices simply
cannot survive targeted attacks of the scale and scope that we are seeing
today. In fact, I believe we're just about to witness the first examples of
large scale 'watering hole' attacks. This would be malware that attacks the
software development and build processes themselves to distribute copies of
itself to tens or even hundreds of millions of machines in a single,
officially signed, instantaneous update. Deterministic, distributed builds
are perhaps the only way we can reliably prevent these types of targeted
attacks in the face of the endless stockpiling of weaponized exploits and
other 'cyberweapons'."
New vulnerabilities
chromium: multiple vulnerabilities
Package(s): chromium-browser
CVE #(s): CVE-2013-2887, CVE-2013-2900, CVE-2013-2901, CVE-2013-2902, CVE-2013-2903, CVE-2013-2904, CVE-2013-2905
Created: August 26, 2013
Updated: September 18, 2013
Description:
From the CVE entries:
Multiple unspecified vulnerabilities in Google Chrome before 29.0.1547.57 allow attackers to cause a denial of service or possibly have other impact via unknown vectors. (CVE-2013-2887)
The FilePath::ReferencesParent function in files/file_path.cc in Google Chrome before 29.0.1547.57 on Windows does not properly handle pathname components composed entirely of . (dot) and whitespace characters, which allows remote attackers to conduct directory traversal attacks via a crafted directory name. (CVE-2013-2900)
Multiple integer overflows in (1) libGLESv2/renderer/Renderer9.cpp and (2) libGLESv2/renderer/Renderer11.cpp in Almost Native Graphics Layer Engine (ANGLE), as used in Google Chrome before 29.0.1547.57, allow remote attackers to cause a denial of service or possibly have unspecified other impact via unknown vectors. (CVE-2013-2901)
Use-after-free vulnerability in the XSLT ProcessingInstruction implementation in Blink, as used in Google Chrome before 29.0.1547.57, allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to an applyXSLTransform call involving (1) an HTML document or (2) an xsl:processing-instruction element that is still in the process of loading. (CVE-2013-2902)
Use-after-free vulnerability in the HTMLMediaElement::didMoveToNewDocument function in core/html/HTMLMediaElement.cpp in Blink, as used in Google Chrome before 29.0.1547.57, allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors involving moving a (1) AUDIO or (2) VIDEO element between documents. (CVE-2013-2903)
Use-after-free vulnerability in the Document::finishedParsing function in core/dom/Document.cpp in Blink, as used in Google Chrome before 29.0.1547.57, allows remote attackers to cause a denial of service or possibly have unspecified other impact via an onload event that changes an IFRAME element so that its src attribute is no longer an XML document, leading to unintended garbage collection of this document. (CVE-2013-2904)
The SharedMemory::Create function in memory/shared_memory_posix.cc in Google Chrome before 29.0.1547.57 uses weak permissions under /dev/shm/, which allows attackers to obtain sensitive information via direct access to a POSIX shared-memory file. (CVE-2013-2905)
condor: denial of service
Package(s): condor
CVE #(s): CVE-2013-4255
Created: August 22, 2013
Updated: August 28, 2013
Description:
From the Red Hat advisory:
A denial of service flaw was found in the way HTCondor's policy definition
evaluator processed certain policy definitions. If an administrator used an
attribute defined on a job in a CONTINUE, KILL, PREEMPT, or SUSPEND
condor_startd policy, a remote HTCondor service user could use this flaw to
cause condor_startd to exit by submitting a job that caused such a policy
definition to be evaluated to either the ERROR or UNDEFINED states.
(CVE-2013-4255)
glibc: multiple vulnerabilities
Package(s): glibc
CVE #(s): CVE-2012-4412, CVE-2012-4424, CVE-2013-2207, CVE-2013-4237
Created: August 22, 2013
Updated: September 5, 2013
Description:
From the Fedora advisory:
CVE-2012-4412 glibc: strcoll() integer overflow leading to buffer overflow
CVE-2012-4424 glibc: alloca() stack overflow in the strcoll() interface
CVE-2013-2207 glibc (pt_chown): Improper pseudotty ownership and permissions changes when granting
access to the slave pseudoterminal
CVE-2013-4237 glibc: Buffer overwrite when using readdir_r on file systems returning file names
longer than NAME_MAX characters
kernel: two vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-0343, CVE-2013-4254
Created: August 23, 2013
Updated: September 26, 2013
Description:
From the Red Hat bugzilla entries [1, 2]:
CVE-2013-4254: Linux kernel built for the ARM(CONFIG_ARM/CONFIG_ARM64) platforms along with the
hardware performance counter support(CONFIG_HW_PERF_EVENTS) is vulnerable to a
NULL pointer dereference flaw. This could lead to the kernel crash resulting in
DoS or potential privilege escalation to gain root privileges by a non-root user.
An unprivileged user/program could use this flaw to crash the kernel resulting
in DoS or potential privilege escalation to gain root access to a machine.
CVE-2013-0343:
Due to the way the Linux kernel handles the creation of IPv6 temporary
addresses a malicious LAN user can remotely disable them altogether
which may lead to privacy violations and information disclosure.
Reference:
http://seclists.org/oss-sec/2012/q4/292
http://seclists.org/oss-sec/2013/q1/92
kfreebsd-9: privilege escalation/information leak
Package(s): kfreebsd-9
CVE #(s): CVE-2013-3077, CVE-2013-4851, CVE-2013-5209
Created: August 27, 2013
Updated: August 28, 2013
Description:
From the Debian advisory:
CVE-2013-3077:
Clement Lecigne from the Google Security Team reported an integer
overflow in computing the size of a temporary buffer in the IP
multicast code, which can result in a buffer which is too small
for the requested operation. An unprivileged process can read or
write pages of memory which belong to the kernel. These may lead
to exposure of sensitive information or allow privilege
escalation.
CVE-2013-4851:
Rick Macklem, Christopher Key and Tim Zingelman reported that the
FreeBSD kernel incorrectly uses client supplied credentials
instead of the one configured in exports(5) when filling out the
anonymous credential for a NFS export, when -network or -host
restrictions are used at the same time. The remote client may
supply privileged credentials (e.g. the root user) when accessing
a file under the NFS share, which will bypass the normal access
checks.
CVE-2013-5209:
Julian Seward and Michael Tuexen reported a kernel memory
disclosure when initializing the SCTP state cookie being sent in
INIT-ACK chunks, a buffer allocated from the kernel stack is not
completely initialized. Fragments of kernel memory may be
included in SCTP packets and transmitted over the network. For
each SCTP session, there are two separate instances in which a
4-byte fragment may be transmitted.
This memory might contain sensitive information, such as portions
of the file cache or terminal buffers. This information might be
directly useful, or it might be leveraged to obtain elevated
privileges in some way. For example, a terminal buffer might
include a user-entered password.
lcms: buffer overflows
Package(s): lcms
CVE #(s): CVE-2013-4276
Created: August 27, 2013
Updated: August 28, 2013
Description:
From the Mageia advisory:
Three buffer overflows in Little CMS version 1.19 could possibly be
exploited through user input.
nmap: arbitrary file upload flaw
Package(s): nmap
CVE #(s): CVE-2013-4885
Created: August 28, 2013
Updated: August 28, 2013
Description:
From the nmap advisory:
It is possible to write arbitrary files to a remote system, through a
specially crafted server response for NMAP http-domino-enum-passwords.nse
script (from the official Nmap repository).
php: multiple vulnerabilities
Package(s): php
CVE #(s): CVE-2013-4248, CVE-2011-4718
Created: August 26, 2013
Updated: September 9, 2013
Description:
From the CVE entries:
Session fixation vulnerability in the Sessions subsystem in PHP before 5.5.2 allows remote attackers to hijack web sessions by specifying a session ID. (CVE-2011-4718)
The openssl_x509_parse function in openssl.c in the OpenSSL module in PHP before 5.4.18 and 5.5.x before 5.5.2 does not properly handle a '\0' character in a domain name in the Subject Alternative Name field of an X.509 certificate, which allows man-in-the-middle attackers to spoof arbitrary SSL servers via a crafted certificate issued by a legitimate Certification Authority, a related issue to CVE-2009-2408. (CVE-2013-4248)
poppler: code execution
Package(s): poppler
CVE #(s): CVE-2012-2142
Created: August 22, 2013
Updated: October 1, 2013
Description:
From the openSUSE advisory:
PDF files
could emit messages with terminal escape sequences which
could be used to inject shell code if the user ran a PDF
viewer from a terminal shell (CVE-2012-2142).
python: man in the middle attack
Package(s): python
CVE #(s): CVE-2013-4238
Created: August 26, 2013
Updated: October 1, 2013
Description:
From the CVE entry:
The ssl.match_hostname function in the SSL module in Python 2.6 through 3.4 does not properly handle a '\0' character in a domain name in the Subject Alternative Name field of an X.509 certificate, which allows man-in-the-middle attackers to spoof arbitrary SSL servers via a crafted certificate issued by a legitimate Certification Authority, a related issue to CVE-2009-2408.
python-django: cross-site scripting
Package(s): python-django
CVE #(s): CVE-2013-4249
Created: August 23, 2013
Updated: September 3, 2013
Description:
From the Red Hat bugzilla entry:
When displaying the value of a URLField -- a model field type for storing URLs -- this interface treated the values of such fields as safe, thus failing to properly accommodate the potential for dangerous values. A proof-of-concept application has been provided to the Django project, showing how this can be exploited to perform XSS in the administrative interface.
In a normal Django deployment, this will only affect the administrative interface, as the incorrect handling occurs only in form-widget code in django.contrib.admin. It is, however, possible that other applications may be affected, if those applications make use of form widgets provided by the admin interface.
tiff: code execution
Package(s): tiff
CVE #(s): CVE-2013-4244
Created: August 28, 2013
Updated: September 18, 2013
Description:
From the Debian advisory:
Pedro Ribeiro and Huzaifa S. Sidhpurwala discovered multiple
vulnerabilities in various tools shipped by the tiff library. Processing
a malformed file may lead to denial of service or the execution of
arbitrary code.
wireshark: multiple vulnerabilities
Kernel development
Brief items
The current development kernel is 3.11-rc7,
released on
August 25 with a special announcement: "
I'm doing a (free)
operating system (just a hobby, even if it's big and professional) for 486+
AT clones and just about anything else out there under the sun. This has
been brewing since april 1991, and is still not ready. I'd like any
feedback on things people like/dislike in Linux 3.11-rc7." He is, of
course, celebrating the 22nd anniversary of
his
original "hello everybody" mail.
Stable updates: no stable updates have been released in the last
week. As of this writing, the
3.10.10,
3.4.60, and
3.0.94 updates are in the review process;
they can be expected on or after August 29.
It can be hard to realize "this piece of your code is utter
*expletive*" actually means "I think the general concept is okay,
but you need to rethink this piece of your code, and I think you
can make it work."
On the other hand, after grad school, this is second nature.
—
Andy Lutomirski
Also historically ignoring hardware vendor management chains has
always led to better drivers. Not saying that some vendors don't
have competent engineers (many do, many don't), but as soon as
management gets involved in decisions about drivers all hope is
lost.
—
Christoph Hellwig
By Jonathan Corbet
August 28, 2013
A few weeks back, we
noted the merging of a
patch adding support for the
flink() system call, which would
allow a program to link to a file represented by an open file descriptor.
This functionality had long been blocked as the result of security worries;
it went through this time on the argument that the kernel already provided
that feature via the
linkat() call. At that point, it seemed like
this feature was set for the 3.11 kernel.
Not everybody was convinced, though; in particular, Brad Spengler
apparently raised the issue of processes making links to file descriptors
that had been passed in from a more privileged domain. So Andy Lutomirski,
the author of the original patch, posted a
followup to restrict the functionality to files created with the
O_TMPFILE option. The only problem was that nobody much liked the
patch; Linus was reasonably clear about his
feelings.
What followed was a long discussion on how to better solve the problem,
with a number of patches going back and forth. About the only thing that
became clear was that the best solution was not yet clear. So, on
August 28, Linus reverted the original
patch, saying "Clearly we're not done with this discussion, and
the patches flying around aren't appropriate for 3.11". So the
flink() functionality will have to wait until at least 3.12.
Kernel development news
By Jonathan Corbet
August 28, 2013
One might think that, by now, we would have a pretty good idea of how to
optimally manage data streams with the TCP protocol. In truth, there still
seems to be substantial room for improvement. Larger problems like
bufferbloat have received a lot of
attention recently, but there is ongoing work in other aspects of
real-world networking as well. A couple of patches posted recently by Eric
Dumazet show the kind of work that is being done.
TSO sizing
TCP segmentation offload (TSO) is a hardware-assisted technique for
improving the performance of outbound data streams. As a stream of data (a
"flow") is
transmitted, it must be broken up into smaller segments, each of which fits
into one packet. A network interface that supports TSO can accept a large
buffer of data and do this segmentation work within the hardware. That
reduces the number of operations performed and interrupts taken by the host
CPU, making the transmission process more efficient. The use of techniques
like TSO makes it possible for Linux systems to run high-end network
interfaces at their full speed.
A potential problem with TSO is that it can cause the interface to dump a
large number of packets associated with a single stream onto the wire in a
short period of time. In other words, data transmission can be bursty,
especially if the overall flow rate for the connection is not all that
high. All of those packets will just end up sitting in a buffer somewhere,
contributing to bufferbloat and increasing the chances that some of those
packets will be dropped. If those packets were transmitted at a more
steady pace, the stress on the net as a whole would be reduced and
throughput could well increase.
The simple TSO automatic sizing patch
posted by Eric (with a Cc to Van Jacobson at a new google.com address)
tries to spread out transmissions in just that way. There are two changes
involved, the first of which is to make intelligent choices about how much
data should be handed to the interface in a single TSO transmission.
Current kernels will provide a maximally sized buffer — usually 64KB — to
be transmitted all at once. With the automatic sizing patch, that buffer
size is reduced to an amount that, it is estimated, will take roughly 1ms
to transmit at the current flow rate. In this way, each transmission will
produce a smaller burst of data if the flow rate is not high.
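In rough terms, the sizing logic amounts to the following sketch; the
constants and clamping here are illustrative rather than lifted from
the patch itself:

    #include <stdint.h>

    #define TSO_MAX_BYTES (64 * 1024)   /* today's maximal TSO buffer */

    /* Limit a TSO chunk to roughly 1ms of transmission time at the
     * flow's current pacing rate (in bytes per second). */
    static uint32_t tso_autosize(uint64_t pacing_rate, uint32_t mss)
    {
        uint64_t bytes = pacing_rate / 1000;   /* ~1ms worth of data */

        if (bytes < 2 * (uint64_t)mss)   /* always allow two segments */
            bytes = 2 * (uint64_t)mss;
        if (bytes > TSO_MAX_BYTES)
            bytes = TSO_MAX_BYTES;
        return (uint32_t)bytes;
    }

Under this scheme a 10Gb/s flow still gets the full 64KB buffer, while
a 10Mb/s flow drops all the way down to the two-segment minimum.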
The other piece of the puzzle is called "TCP pacing"; it is a TCP
implementation change intended to set
the pace at which packets are transmitted to approximately match the pace
at which they can get through the network. The existing TCP flow control
mechanisms tell each endpoint how much data it is allowed to transmit, but
they don't provide a time period over which that transmission should be
spread, so, once again, the result tends to be bursts of packets.
TCP pacing ensures that packets
are transmitted at a rate that doesn't cause them to pile up in buffers
somewhere between the two endpoints. It can, of course, also be used to
restrict the data rate of a given flow to something lower than what the
network could handle, but that is not the objective of this patch.
Interestingly, the patch does not actually implement pacing, which would
add some significant complexity to the TCP stack — code that does not really
need more complexity. All it does is to calculate the rate that should be
used, in the hope that some other level of the stack can then enforce that
rate. For now, that other part would appear to be the new "fair queue" packet scheduler.
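The rate itself falls out of state that TCP already tracks: roughly one
congestion window of data per smoothed round-trip time. A sketch of
such a calculation follows; the factor of two, which gives the flow
headroom to grow rather than locking it to its current rate, is this
sketch's assumption rather than a detail quoted from the patch:

    #include <stdint.h>

    /* Estimate a pacing rate in bytes per second from the congestion
     * window (in segments), the MSS, and the smoothed RTT in
     * microseconds. */
    static uint64_t estimate_pacing_rate(uint32_t cwnd_segments,
                                         uint32_t mss, uint32_t srtt_us)
    {
        if (srtt_us == 0)
            return UINT64_MAX;   /* no RTT estimate yet: do not pace */

        uint64_t bytes_per_rtt = (uint64_t)cwnd_segments * mss;
        return 2 * bytes_per_rtt * 1000000 / srtt_us;
    }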
The FQ scheduler
A packet scheduler is charged with organizing the flow of packets through
the network stack to meet a set of policy objectives. The kernel has quite
a few of them, including CBQ for fancy class-based queueing, CHOKe for routers, and a couple of variants on
the CoDel queue management algorithm. FQ
joins this list as a relatively simple scheduler designed to implement fair
access across large numbers of flows with local endpoints while keeping
buffer sizes down; it also happens to implement TCP pacing.
FQ keeps track of every flow it sees passing through the system. To do so,
it calculates an eight-bit hash based on the socket associated with the
flow, then uses the result as an index into an array of red-black trees.
The data structure is designed, according to Eric, to scale well up to
millions of concurrent flows. A number of parameters are associated with
each flow, including its current transmission quota and, optionally, the
time at which the next packet can be transmitted.
That transmission time is used to implement the TCP pacing support. If a
given socket has a pace specified for it, FQ will calculate how far the
packets should be spaced in time to conform to that pace. If a flow's next
transmission time is in the future, that flow is added to another red-black
tree with the transmission time used as the key; that tree, thus, allows
the kernel to track delayed flows and quickly find the one whose next
packet is due to go out the soonest. A single timer is then used, if
needed, to ensure that said packet is transmitted at the right time.
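Put together, each flow can be pictured along the lines of the
following C sketch; these are simplified stand-ins for the patch's data
structures, not its actual declarations:

    #include <stdint.h>

    /* Minimal stand-in for the kernel's linux/rbtree.h node. */
    struct rb_node { struct rb_node *rb_left, *rb_right; };
    struct sk_buff;                       /* a queued packet */

    struct fq_flow {
        struct rb_node  hash_node;        /* node in a hashed lookup tree */
        struct rb_node  rate_node;        /* node in the delayed-flow tree */
        struct sk_buff *queue;            /* this flow's queued packets */
        int             credit;           /* remaining transmission quota */
        uint64_t        time_next_packet; /* earliest allowed send, in ns */
    };

    /* The pacing decision at dequeue time: a flow whose next
     * transmission time is still in the future gets parked in the
     * time-indexed tree instead of being serviced now. */
    static int fq_flow_ready(const struct fq_flow *f, uint64_t now_ns)
    {
        return f->time_next_packet <= now_ns;
    }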
The scheduler maintains two linked lists of active flows, the "new" and
"old" lists. When a flow is first encountered, it is placed on the new
list. The packet dispatcher services flows on the new list first; once a
flow uses up its quota, that flow is moved to the old list. The idea here
appears to be to give preferential treatment to new, short-lived
connections — a DNS lookup or HTTP "GET" command, for example — and not let
those connections be buried underneath larger, longer-lasting flows.
Eventually the scheduler works its way through all active flows, sending a
quota of data from each; then the process starts over.
There are a number of additional details, of course. There are limits on
the amount of data queued for each flow, as well as a limit on the amount
of data buffered within the scheduler as a whole; any packet that would
exceed one of those limits is dropped. A special "internal" queue exists
for high-priority traffic, allowing it to reach the wire more quickly. And
so on.
One other detail is garbage collection. One problem with this kind of flow
tracking is that nothing tells the scheduler when a particular flow is shut
down; indeed, nothing can tell the scheduler for flows without local
endpoints or for non-connection-oriented protocols. So the scheduler must
figure out on its own when it can stop
tracking any given flow. One way to do that would be to drop the flow as
soon as there are no packets associated with it, but that would cause some
thrashing as the queues empty and refill; it is better to keep flow data
around for a little while in anticipation of more traffic. FQ handles this
by putting idle flows into a special "detached" state, off the lists of
active flows. Whenever a new flow is added, a pass is made over the
associated red-black tree to clean out flows that have been detached for a
sufficiently long time — three seconds in the current patch.
The result, says Eric, is fair scheduling of packets from any number of
flows with nice spacing in situations where pacing is being used. Given
that the comments so far have been mostly concerned with issues like white
space, it seems unlikely that anybody is going to disagree with its
merging. So TCP pacing and the FQ scheduler seem like candidates for the
mainline in the near future — though the upcoming 3.12 cycle may be just a
bit too near at this point.
By Jake Edge
August 28, 2013
Lightweight virtualization using containers is a technique that has finally
come together for Linux, though there are still some rough edges that may
need filing down. Containers are created by using two
separate
kernel features: control groups (cgroups) and namespaces. Cgroups are
in the process of being revamped, and more namespaces may yet need to be
added to the varieties currently available. For example, there is no way
to separate most devices into their own
namespace. That's a hole that Oren Laadan would like to see filled, so he put
out an RFC
on device namespaces recently.
Namespaces partition global system resources so that different sets of
processes have their own view of those resources. For example, mount
namespaces partition the mounted filesystems into different views, with the
result that processes in one namespace are unable to see or interact with
filesystems that are only mounted in another. Similarly, PID
namespaces give each namespace its own set of process IDs (PIDs).
Certain devices are currently handled by their own namespace or similar
functionality: network namespaces for network devices and the
devpts pseudo-filesystem for pseudo-terminals (i.e. pty). But there
is no way to partition the view of all devices in the system, which is what
device
namespaces would do.
The motivation for the feature is to allow multiple virtual phones on a
single physical phone. For example, one could have two complete Android
systems running on the phone, with one slated for work purposes and the
other for personal use. Each system would run in its own container that
would be isolated from the other. That isolation would allow companies to
control the apps and security of the "company half" of a phone, while
allowing the user to keep their personal information separate. A video gives an overview of the
idea. Much of that separation can be done today, but
there is a missing piece: virtualizing the devices (e.g. frame buffer,
touchscreen, buttons).
The proposal adds the concept of an "active" device namespace,
which is
the one that the user is currently interacting with. The upshot is that a
user could switch between the phone personalities (or personas) as easily
as they switch between apps today. Each personality would have access to
all of the capabilities of the phone while it was the active namespace, but
none while it was the inactive (or background) namespace.
Setting up a device namespace is done in the normal way, using the clone(),
setns(), or unshare() system
calls. One surprise is that there is no new CLONE_* flag added for
device namespaces, and the CLONE_NEWPID flag is overloaded. A
comment in the code explains why:
/*
 * Couple device namespace semantics with pid-namespace.
 * It's convenient, and we ran out of clone flags anyway.
 */
While coupling PID and device namespaces may work, it does seem like some
kind of long-term solution to the clone flag problem is required. Once a
process has been put into a device namespace, any
open() of a
namespace-aware device will restrict that device to the namespace.
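For illustration, creating a device namespace under this scheme would look
just like creating a PID namespace. The sketch below is a plain user-space
program; the behavior attributed to the child is an assumption based on the
RFC's description:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[64 * 1024];

static int child(void *arg)
{
    /* With the RFC patches applied, this process would be in a new
     * device namespace as well as a new PID namespace. */
    printf("child sees itself as PID %d\n", (int)getpid()); /* prints 1 */
    return 0;
}

int main(void)
{
    /* CLONE_NEWPID requires CAP_SYS_ADMIN (i.e. run as root). */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | SIGCHLD, NULL);

    if (pid < 0) {
        perror("clone");
        exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);
    return 0;
}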
At some level, adding device namespaces is simply a matter of virtualizing
the major and minor
device numbers so that each namespace has its own set of them. The
major/minor numbers in a namespace would correspond to the driver loaded
for that
namespace. Drivers that might be available to multiple namespaces would need
to be changed to be namespace-aware. For some kinds of drivers, for
example those
without any real state (e.g. for Android, the LED
subsystem or the backlight/LCD
subsystem), the changes would be minimal—essentially just a test: if
the namespace that contains the device is the active one, proceed;
otherwise, ignore any requested changes.
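A minimal sketch of what such a test might look like in the LED subsystem
follows; dev_ns_active() and current_dev_ns() are hypothetical helpers
standing in for whatever the real patches provide:

#include <linux/leds.h>

/* Hypothetical namespace-aware wrapper for a stateless operation. */
static void led_ns_set_brightness(struct led_classdev *led_cdev,
                                  enum led_brightness value)
{
    /* Requests from a background (inactive) namespace are ignored. */
    if (!dev_ns_active(current_dev_ns()))
        return;

    led_cdev->brightness_set(led_cdev, value);
}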
Devices, though, are sometimes stateful. One can't suddenly switch the
source of frame buffer data mid-stream (or mix two streams) and expect the
screen contents to stay coherent. So drivers and subsystems will need to handle
the switching behavior. For example, the
framebuffer device should only reflect changes to the screen from the
active namespace, but it should buffer changes from the background
namespace so that those changes will be reflected in the display after a
switch.
Laadan and his colleagues at Cellrox
have put together a set
of patches based on the 3.4 kernel for the Android emulator (goldfish).
There is also a fairly detailed description of the patches and the changes
made for both stateless and stateful devices. An
Android-based
demo that switches between a running phone and an app that displays a
changing color palette has also been created.
So far, there hasn't been much in the way of discussion of the idea on the
containers and lxc-devel mailing lists that the RFC was posted to. On one
hand, it makes sense to be able to virtualize all of the devices in a
system; on the other, that means a lot of drivers
might need to change. There may be some "routing" issues to resolve, as
well—when the phone rings, which namespace handles it? The existing
proof-of-concept API for switching the
active namespace would also likely need some work.
While it may be a
worthwhile feature, it could also lead to a large ripple effect of driver changes. How device namespaces
fare in terms of moving toward the
mainline may well hinge on others stepping forward with additional use
cases. In the end, though, the core
changes to support the feature are fairly small, so the phone
personality use case might be enough all on its own.
Comments (4 posted)
By Jonathan Corbet
August 28, 2013
As a general rule, kernel developers prefer data structures that are
designed for readability and maintainability. When one understands the
data structures used by a piece of code, an understanding of the code
itself is usually not far away. So it might come as a surprise that one of
the kernel's most heavily-used data structures is also among its least
comprehensible. That data structure is
struct page, which
represents a page of physical memory. A recent patch set making
struct page even more complicated provides an excuse for a
quick overview of how this structure is used.
On most Linux systems, a page of physical memory contains 4096 bytes; that
means that a typical system contains millions of pages. Management of
those pages requires the maintenance of a page structure for each
of those physical pages. That puts a lot of pressure on the size of
struct page; expanding it by a single byte will cause the
kernel's memory use to grow by (possibly many) megabytes. That creates a
situation where almost any trick is justified if it can avoid making
struct page bigger.
Enter Joonsoo Kim, who has posted a patch
set aimed at squeezing more information into struct page
without making it any bigger.
In particular, he is concerned about the space occupied by
struct slab, which is used by the slab memory allocator (one
of three allocators that can be configured into the kernel, the others
being called SLUB and SLOB). A slab can
be thought of as one or more contiguous pages containing an array of
structures, each of which can be allocated separately; for example, the
kmalloc-64 slab holds 64-byte chunks used to satisfy
kmalloc() calls requesting between 32 and 64 bytes of space. The
associated slab structures are also used in great quantity;
/proc/slabinfo on your editor's system shows over 28,000 active
slabs for the ext4 inode
cache alone. A reduction in that space use would be welcome; Joonsoo
thinks this can be done — by folding the contents of
struct slab into the page structure representing the
memory containing the slab itself.
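As a concrete illustration of that size-class behavior (an example fragment,
not code from the patch set), both of the allocations below are satisfied
from the kmalloc-64 slab:

#include <linux/slab.h>

static void size_class_example(void)
{
    void *a = kmalloc(40, GFP_KERNEL);  /* rounded up to 64 bytes */
    void *b = kmalloc(64, GFP_KERNEL);  /* exactly 64 bytes */

    kfree(a);
    kfree(b);
}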
What's in struct page
Joonsoo's patch is perhaps best understood by stepping through struct
page and noting the changes that are made to accommodate the extra
data. The full definition of this structure can be found in
<linux/mm_types.h> for the curious. The first field appears
simple enough:
unsigned long flags;
This field holds flags describing the state of the page: dirty, locked,
under writeback, etc. In truth, though, this field is not as simple as it
seems; even the question of whether the kernel is running out of room for
page flags is hard to answer. See this
article for some details on how the flags field is used.
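Kernel code rarely manipulates flags directly; instead, it uses the
accessor macros generated in <linux/page-flags.h>. A small illustrative
fragment:

#include <linux/page-flags.h>
#include <linux/types.h>

/* Query page state through the generated accessors rather than
 * poking at page->flags directly. */
static bool page_needs_writeback(struct page *page)
{
    return PageDirty(page) && !PageWriteback(page);
}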
Following flags is:
struct address_space *mapping;
For pages that are in the page cache (a large portion of the pages on most
systems), mapping points to the information needed to access the
file that backs up the page. If, however, the page
is an anonymous page (user-space memory backed by swap), then
mapping will point to an anon_vma structure, allowing the
kernel to quickly find the page tables that refer to this page; see this article for a diagram and details. To
avoid confusion between the two types of page, anonymous pages will have
the least-significant bit set in mapping; since the pointer itself
is always aligned to at least a word boundary, that bit would otherwise be
clear.
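That trick is what the kernel's PageAnon() test relies on; its definition,
lightly paraphrased from <linux/mm.h>, is essentially:

/* A page is anonymous if the low bit of its mapping pointer is set;
 * PAGE_MAPPING_ANON is that bit. */
static inline int PageAnon(struct page *page)
{
    return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
}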
This is the first place where Joonsoo's patch makes a change. The
mapping field is not currently used for kernel-space memory, so he
is able to use it as a pointer to the
first free object in the slab, eliminating the need to keep it in
struct slab.
Next is where things start to get complicated:
struct {
    union {
        pgoff_t index;
        void *freelist;
        bool pfmemalloc;
    };
    union {
        unsigned long counters;
        struct {
            union {
                atomic_t _mapcount;
                struct { /* SLUB */
                    unsigned inuse:16;
                    unsigned objects:15;
                    unsigned frozen:1;
                };
                int units;
            };
            atomic_t _count;
        };
    };
};
(Note that this piece has been somewhat simplified through the removal of
some #ifdefs and a fair number of comments.) In the first union,
index is used with page-cache pages to hold the offset into the
associated file. If, instead, the page is managed by the SLUB or SLOB
allocators,
freelist points to a list of free objects. The slab allocator
does not use freelist, but Joonsoo's patch makes slab use it the
same way the other allocators do. The pfmemalloc member, instead,
acts like a page flag; it is set on a free page if memory is tight and the
page should only be used as part of an effort to free more pages.
In the second union, both counters and the innermost anonymous
struct are used by the SLUB allocator, while units is
used by the SLOB allocator. The _mapcount and _count
fields are both usage counts for the page; _mapcount is the number
of page-table entries pointing to the page, while _count is a
general reference count. There are a number of subtleties around the use
of these fields, though, especially _mapcount, which helps with
the management of compound pages as well. Here, Joonsoo adds another field
to the second union:
unsigned int active; /* SLAB */
It is the count of active objects, again taken from struct slab.
Next we have:
union {
    struct list_head lru;
    struct {
        struct page *next;
        int pages;
        int pobjects;
    };
    struct list_head list;
    struct slab *slab_page;
};
For anonymous and page-cache pages, lru holds the page's position
in one of the least-recently-used (LRU) lists. The anonymous structure is used
by SLUB, while list is used by SLOB. The slab allocator uses
slab_page to refer back to the containing slab
structure. Joonsoo's patch complicates things here in an interesting way:
he overlays an rcu_head structure over lru to manage the
freeing of the associated slab using read-copy-update. Arguably that
structure should be added to the containing union, but the current code
just uses lru and casts instead. This trick will also involve
moving slab_page to somewhere else in the structure, but the
current patch set does not contain that change.
The next piece is:
union {
    unsigned long private;
#if USE_SPLIT_PTLOCKS
    spinlock_t ptl;
#endif
    struct kmem_cache *slab_cache;
    struct page *first_page;
};
The private field essentially belongs to whatever kernel subsystem
has allocated the page; it sees a number of uses throughout the kernel.
Filesystems, in particular, make heavy use of it. The ptl field
is used if the page is used by the kernel to hold page tables; it allows
the page table lock to be split into multiple locks if the number of CPUs
justifies it. In most configurations, a system containing four or more
processors will split the locks in this way. slab_cache is used
as a back pointer by slab and SLUB, while first_page is used
within compound pages to point to the first page in the set.
After this union, one finds:
#if defined(WANT_PAGE_VIRTUAL)
    void *virtual;
#endif /* WANT_PAGE_VIRTUAL */
This field, if it exists at all, contains the kernel virtual address for
the page. It is not useful in many situations because that address is
easily calculated when it is needed. For systems where high memory is in
use (generally 32-bit systems with 1GB or more of memory), virtual
holds the address of high-memory pages that have been temporarily mapped
into the kernel with kmap(). Following virtual are a
couple of optional fields used when various debugging options are turned
on.
With the changes described above, Joonsoo's patch moves much of the
information previously kept in struct slab into the
page structure. The remaining fields are eliminated in other
ways, leaving struct slab with nothing to hold and, thus, no
further reason to exist. These structures are not huge, but, given that
there can be tens of thousands of them (or more) in a running system, the
memory savings from their elimination can be significant. Concentrating
activity on struct page may also have beneficial cache
effects, improving performance overall. So the patches may well be
worthwhile, even at the cost of complicating an already complex situation.
And the situation is indeed complex: struct page is a complicated
structure with a number
of subtle rules regarding its use. The saving grace, perhaps, is that it
is so heavily used that any kind of misunderstanding about the rules will
lead quickly to serious problems. Still, trying to put more information
into this structure is not a task for the faint of heart. Whether Joonsoo
will succeed remains to be seen, but he clearly is not the first to eye
struct page as a place to stash some useful memory management
information.
Comments (none posted)
Patches and updates
Kernel trees
- Sebastian Andrzej Siewior: 3.10.9-rt5.
(August 23, 2013)
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
- Lucas De Marchi: kmod 15.
(August 22, 2013)
Page editor: Jonathan Corbet
Distributions
By Jake Edge
August 28, 2013
The allure of "long-term support" (LTS) releases for a distribution is strong.
We have seen various distributions struggle with the idea of an LTS release
over
the years and, of
course, Ubuntu has five years of official support for its LTS. More recently,
Debian started discussing a "Debian LTS" on the debian-devel mailing list.
As is often the case, the "many years of support" idea may well run aground
due to a
typical
problem for community distributions: people power.
The posting of a June announcement
that the DreamHost hosting service would be moving from Debian to Ubuntu LTS
sparked the discussion. The switch was made because the "release
cycle for Debian is stable, but it's not long enough for us to focus on
stability", according to DreamHost. The support window for
a Debian release is one year after the next stable release, which generally
works out to around three years.
For some—DreamHost included—three years is not enough time. The main
problem for extending the Debian support window, unsurprisingly, is
security updates. Though, as Russ Allbery pointed out, some Ubuntu LTS users may not
actually be getting the full support they are expecting, because it is only
the packages in the "main" (and not "universe") repository that get that
support.
Instead, they just blindly assume that LTS having security
support for five years means that, as long as they regularly upgrade, they
don't have to worry about security; they therefore end up running various
non-core software with open security vulnerabilities.
But that problem is not limited to Ubuntu, as Paul Wise noted. Currently, Iceweasel (Debian's version
of Firefox) is no longer supported for squeeze (Debian 6.0) and other
packages have fallen out of support during the one-year oldstable support
period in the past, he said. The sheer size of the Debian package
repository makes the support problem that much harder, which is presumably
why Ubuntu has focused its support on a smaller subset.
There also seems to be some disagreement among Debian developers with
regard to handling security updates. Some, like Pau Garcia i Quiles, see security updates as the responsibility of
the package maintainer (with some assistance from the security team). Others, like Steve Langasek, believe it is a security team function with
input and assistance from the maintainer. In
the end, it may not really matter; without volunteers to prolong the
support, no LTS is possible. As Ian Jackson put it: "unless and until we have
people who volunteer to do the security support for an LTS, we won't
have an LTS".
There is more to it than that, however, according to Allbery. He argued that beyond two years, security support
is somewhat illusory, even for the enterprise distributions. Once the
upstream projects have moved on—and by two years after a release, they mostly
have—it becomes harder and harder to support older releases.
I'm painfully aware of the steep cliff
dropoff of upstream support for security fixes once you go beyond two
years after the release of the software. Beyond that point, even if you
do get security fixes, you're probably getting fixes that have been
backported by people with only a vague knowledge of the code, fixes that
often have not been thoroughly tested or tested by someone who uses the
software in question and that upstream has never looked at and will
disclaim any knowledge of or support for.
As painful as it is, if you are worried about security in a production
environment, falling more than a couple of years behind current
distribution releases is probably not in your best interests no matter
what security support you supposedly have.
Wookey took a different tack, however,
noting that organizations like DreamHost should, if they are interested in
longer-term
support, be funding the work. Philip Hands agreed, pointing out that it is "not an exciting task that
volunteers can be expected to flock to". He suggested that
interested companies band together:
If that made the somewhat more enlightened
companies band together to share the LTS workload amongst themselves
somehow (possibly by having a limited distribution model of some sort,
restricted to members of the mutual-support-club) then that would be no
bad thing either.
Hands also cautioned that community efforts at long-term support could be
detrimental to the distribution as a whole:
"I suppose it would be nice if we could provide this long term support,
but really it's going to consume scarce volunteer effort which will
almost certainly have a negative impact on the progress of Debian
proper." In many ways, that is the overall conclusion in the
thread. While there were numerous maintainers volunteering to support
their packages for longer, it really requires a distribution-wide
commitment (or that of a dedicated team), which doesn't seem to be appearing.
LTS releases are clearly popular with certain segments of the Linux
community, but it is a big effort—one that is difficult for a
distribution made up of volunteers to commit to. Even distributions that
are supported by companies, like Fedora, find that commitment to be
difficult, though Ubuntu LTS and openSUSE's Evergreen do provide
counterexamples. It doesn't seem unreasonable that those who want longer
support help make it happen, either via their efforts or their money. The
latter
is one way to look at the enterprise distribution model. For Debian,
though, money alone won't solve the LTS problem—it will require a fair
amount of effort from those
who need it.
Comments (12 posted)
Brief items
Hi, everyone who is bringing up removing the names (especially myself).
Let It Go. Move along,
Go find something positive to do and go do that. Let the people who have
fun doing this, do it and get out of their way.
--
Stephen J Smoogen
Comments (none posted)
The NetBSD Project has
announced NetBSD 6.1.1, the first security/bugfix update of the NetBSD 6.1 release branch. It represents a selected subset of fixes deemed important for security or stability reasons.
Comments (none posted)
OSTree
2013.6 has been released. "
It’s a tool for parallel installation
and atomic upgrades of general-purpose Linux-kernel based operating
systems, and designed to integrate well with a systemd/GNU userspace. You
can use it to safely upgrade client machines over plain HTTP, and longer
term, underneath package systems." See the announcement for a long
discussion of the motivation behind this project, along with
this LWN article from August 2012.
Comments (none posted)
The Ubuntu team has released the third update to its latest Long Term
Support release. The 12.04.3 LTS release is available for Desktop, Server,
Cloud, and Core products, as well as other flavors (Kubuntu, Edubuntu,
Xubuntu, Mythbuntu, and Ubuntu Studio). "
As usual, this point
release includes many updates, and updated installation media has been
provided so that fewer updates will need to be downloaded after
installation. These include security updates and corrections for other
high-impact bugs, with a focus on maintaining stability and compatibility
with Ubuntu 12.04 LTS."
Full Story (comments: none)
Distribution News
openSUSE
The openSUSE Evergreen team has announced that openSUSE 13.1 will be the
next Evergreen release. "
This means that openSUSE 13.1 will
continue to be supplied with security updates and important bugfixes
until it has had a total life time of at least three years."
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
Matthew Garrett
argues for
a clearer focus for the Fedora project. "
Bluntly, if you have a
well-defined goal, people are more likely to either work towards that goal
or go and do something else. If you don't, people will just do whatever
they want. The risk of defining that goal is that you'll lose some of your
existing contributors, but the benefit is that the existing contributors
will be more likely to work together rather than heading off in several
different directions."
Comments (31 posted)
The SUSE openSUSE blog has an
article with some in-depth statistics on the reach of the distribution. It includes various numbers and graphs on downloads, installations, the use of the
Open Build Service, as well as a comparison with Fedora. "
As you can see, Fedora has more downloads than openSUSE. Looking at the users, the situation is reverse: openSUSE has quite a bit more users than Fedora according to this measurement. How is this possible? The explanation is most likely that most openSUSE users upgrade with a 'zypper dup' command to the new releases, while Fedora users tend to do a fresh installation. Note that, like everybody else, we're very much aware of the deceptive nature of statistics: there is always room for mistakes in the analysis of data. To at least provide a way to detect errors and follow the commendable example set by Fedora, here are our data analysis scripts in github."
Comments (15 posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
August 28, 2013
K-9 Mail, a popular free software email client for Android, released
version 4.4 recently, marking the application's first stable update in
nearly a year. The new version adds several important new features,
although it also includes user interface changes that some users have
found disagreeable.
Version 4.400 was released on June 26, following several weeks of
beta testing. The application's interface was updated—in some
cases, to line up with Android's new UI guidelines, such as moving the
"..." menu button from the bottom of the screen to the top. There is
also support for a pull-down-to-reload gesture, which is a feature
common to a lot of contemporary Android applications, and the
notifications have been refreshed to take advantage of Android 4.1's
revamped notification style. That style incorporates more colorful
thumbnail images from the application, in concert with less intrusive
layout and text settings. But the new notifications can also be
expanded with a touch gesture, which in K-9's case means more
new-message summary text is visible without opening the app itself.
Also noteworthy is support for a split-screen message reading
interface, where the list of messages from the currently-selected
folder remains visible while the message being read is opened on the
other half of the screen. The folder view can also display an avatar
image of the message sender (if that sender is in the contacts
database), although this feature can be turned off in the app
settings.
The new version also sports K-9's first implementation of
conversation threading, although this implementation is not without
its quirks. A message thread is marked by a number-in-a-bubble
indicator on the right-hand side of the folder view. The number, as
you would expect, shows how many messages there are in the thread.
But what you might not expect is that tapping it does not open up the
thread. Instead, it opens up a folder-like "thread view" with each
individual message listed in summary form; you must still click
through each message one at a time to read them. Although it is hard
to draw general conclusions, in my own tests I found that K-9 only
recognized threads where the messages arrived in sequence,
uninterrupted by messages with other Subject headers. That does take
some of the usability out of the thread view, but perhaps mileage
varies.
The UI refresh includes plenty of nice, but small, touches, such as
using a different background color for read and unread messages in the
folder view, plus a more easily distinguished read/unread indicator
bubble. There are also improvements in message composition for
right-to-left languages, and the app widget can now display the total
number of unread messages from the "Unified Inbox" view that
encapsulates all accounts.
Under the hood, several IMAP improvements were rolled out, too,
such as server-side search (although at the moment, server-side search
is limited to searching on the Sender and Subject fields), and support
for the "forwarded" message flag. Previous releases also had bugs in
detection of the spam folder as an IMAP SPECIAL-USE folder,
which have now been fixed. This allows multiple email clients to
access the same spam folder even if it is named "Spam" in one client
and "Junk" in the other. There are also several improvements in area
of user notification: the user is now notified on SSL certification
validation failures, when a new K-9 will trigger an update to its database, and there is a "what's new in this update" screen
accompanying every app update (and one that, admirably, uses very
little technobabble in its explanations).
Feedback
Despite the improvements, K-9 4.4 has its share of quirks, and when
it was first released there was quite a bit of hostility among users
about some of the UI changes. For example, several users complained
that the new toolbar buttons are so small that they are hard to tap on
a phone, or (similarly) that the "checkbox" button for selecting a
message was so small that it is hard to tap without opening the
message.
Others complained that the "archive" and "mark as spam" buttons
were removed from the toolbar entirely, and relegated to the "..."
menu. Along the same lines, the "empty trash" function has been moved
to the "long press" pop-up menu in the message view, which at least
one user found difficult to discover. Along the way, the k-9-mail
mailing list accumulated the usual assortment of "can't you put it
back the way it was in previous releases" complaints, mixed in with
actual bug reports.
It is difficult to know how much weight to assign to many of these
UI criticisms. Many times, a user's vitriol about one particular UI
decision boils down to that particular button being the feature that
they make the most use of—which is not a design metric that
scales to large groups. On the other hand, small UI annoyances are
magnified every time they are re-encountered, and email remains one of
the most heavily-used applications for the majority of users. It is
hard to exaggerate how many UI nuances go into making a slick email
client "feel" better than a merely mediocre one.
But it is important to note, also, that a recurring trend among
many critics of the new version of K-9 Mail is that those critics have
their devices set to automatically download and install updates.
Whatever aggravation they feel when the UI changes is surely
compounded by not having advance warning of the change, but it is hard
to muster too much sympathy when automatic-updating can be so easily
switched off.
To be sure, there are still some UI issues to be ironed
out (for instance, button size is a very real concern, given the
screen real estate of the average phone). But on the whole, K-9 does
a remarkable job of presenting email in a lightweight and
easy-to-navigate package. There are several desktop clients that
could learn a thing or two about simplifying the interface and
judicious layout from K-9. It is an even more impressive accomplishment that
K-9 Mail is one of (if not the) best email clients for Android, while
remaining a volunteer-driven free software project.
Comments (5 posted)
Brief items
today I finally achieved an important milestone in my involvement with #gnome: I tricked somebody into taking over a module maintainership.
—
Emmanuele Bassi
Developers everywhere - PHP is not the same as it was even two
years ago. Stop acting like it. The language and the community
have both changed significantly. In many ways PHP is just now
starting to come into its own. In many ways, the PHP community has
gone back in time - back to relive the days that should have been
spent building the tools and libraries that were never
built. Another Magento is not going to appear. It will not
happen. Stop worrying about it.
—
Jarrod Nettles
Comments (2 posted)
Version 1.0 of the Calibre electronic book management system has been
released. "
Lots of
new features have been added to calibre in the last year — a grid view of
book covers, a new, faster database backend, the ability to convert
Microsoft Word files, tools to make changes to ebooks without needing to do
a full conversion, full support for font embedding and subsetting, and many
more."
Comments (6 posted)
Version 1.10 of the Upstart init replacement has been released. New in this edition are upstart-local-bridge (a bridge for jobs on local socket connections), a new "reload signal" stanza that will allow jobs to specify a custom signal (rather than SIGHUP) to send to the main process, shutdown and re-exec fixes, and a new Python 3 module.
Full Story (comments: none)
Git v1.8.4 has been released. This release updates the Cygwin port as well as the helpers for MediaWiki, OS X keychain, and remote transport, and adds numerous new options for various Git commands.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
On his blog, Björn Balazs
writes about the recent effort to "reboot" the KDE
human interface guidelines (HIG). There are three major sections in the HIG (structure, behavior, and presentation) and the team has a first draft of the behavior section. "
We explicitly ask about your opinion. Please read the guidelines and make sure that the text is informative and comply with developers' requirements. The content should be both generic and comprehensive, and help to make KDE awesome. But we are also interested in support. If you are able to create nice sample UIs with Qt please contact the usability team via the kde-guidelines mailinglist."
Comments (2 posted)
Larry Garfield
introduces
some recent changes to the PHP language. "
PHP 5.5 added a
feature called generators. Generators are in some sense a greatly
abbreviated syntax for iterators, but being so simple makes them
considerably more powerful. Essentially, a generator is just a function
that returns-and-pauses rather than returns-and-ends, and what is returned
is not a value but an iterator."
Comments (52 posted)
After
releasing GNOME 3.9.90, which is the first beta of the 3.9 development branch, Matthias Clasen
reflects on what is coming in GNOME 3.10. New features include a combined system status menu, some changes to control-center, the new Maps application, and more use of "header bars". "
Our previous approach of hiding titlebars on maximized windows had the problem that there was no obvious way to close maximized windows, and the titlebars were still using up vertical space on non-maximized windows. Header bars address both of these issues, and pave the way to the Wayland future by being rendered on the client side."
Comments (107 posted)
Over at the Lanedo blog, Michael Natterer explains
recent work the company has done in conjunction with Xamarin to better
integrate GTK+ with Mac OS X. The work involves embedding Quartz
widgets into GTK+, plus properly handling input events.
"Fortunately, event handling in GTK+ and Quartz are similar:
Mouse events are delivered directly to the GdkWindow / NSView they are
happening on. Key events get dispatched to the top-level GdkWindow /
NSWindow which is in charge of forwarding them to the focus widget /
first responder."
Comments (none posted)
Barney Desmond
talks
about an rsync performance problem and its solution; the result is an
interesting overview of how rsync works. "
Most of the activity in
this MySQL data file occurs at the end, where more zeroes had been written
on the sender’s side. rsync hits this section of the file and is
calculating the rolling checksums as normal. For each checksum, it’s
referring to the hash table, hitting the all-zeroes chain, then furiously
traversing the chain skipping over unusable chunks. Things are now possibly
hundreds of times slower than normal, and the backup job has been running
for over a week with no sign of finishing any time soon; not
sustainable."
Comments (18 posted)
Page editor: Nathan Willis
Announcements
Brief items
Two biotech startups are collaborating with the Linux Foundation to work on
OpenBEL. "
OpenBEL is an open source software project that enables
users to capture, store, share and leverage life sciences content through a
knowledge engineering platform. In life sciences, data collection is not
the problem; making information interoperable and actionable has proven to
be more challenging. OpenBEL aims to address those challenges."
Full Story (comments: none)
Articles of interest
The Guardian
talks
with Mark Shuttleworth about the Ubuntu Edge campaign, which failed to
reach its $32 million goal. "
The impression we have from
conversations with manufacturers is that they are open to an alternative to
Android. And end-users don't seem emotionally attached to Android. There's
no network effect from using Android like there was with Windows in the
1990s, where if some businesses starting using Windows then others had to
follow. It's not like that on mobile. They all interoperate. Every Ubuntu
device would be additive to the whole ecosystem of devices."
Comments (42 posted)
The Ada Initiative has
posted a history of three successful conference
anti-harassment campaigns. "
We decided to chronicle the history of conference anti-harassment policies in three communities: science fiction and fantasy, skepticism and atheism, and free and open source software. The goal is to create a standard reference model of how conference anti-harassment campaigns usually work so that we can refer to it when the going gets tough. If you know what other communities went through – e.g., a phase of concerted online harassment of women leaders – then you are less likely to give up. We hope this history will help people working to end harassment in other geek communities: Wikipedia, computer security, anime and comics, computer gaming, and perhaps even academic philosophy."
Full Story (comments: 1)
Ars technica
reports
that New Zealand has banned software patents. "
One Member of Parliament who was deeply involved in the debate, Clare Curran, quoted several heads of software firms complaining about how the patenting process allowed "obvious things" to get patented and that "in general software patents are counter-productive." Curran quoted one developer as saying, "It's near impossible for software to be developed without breaching some of the hundreds of thousands of patents granted around the world for obvious work.""
Comments (20 posted)
New Books
Addison-Wesley Professional has released "Python in Practice".
Full Story (comments: none)
Calls for Presentations
CFP Deadlines: August 29, 2013 to October 28, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| August 30 | October 24–25 | Xen Project Developer Summit | Edinburgh, UK |
| August 31 | October 26–27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| August 31 | September 24–25 | Kernel Recipes 2013 | Paris, France |
| September 1 | November 18–21 | 2013 Linux Symposium | Ottawa, Canada |
| September 6 | October 4–5 | Open Source Developers Conference France | Paris, France |
| September 15 | November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| September 15 | November 15–16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| September 15 | October 3–4 | PyConZA 2013 | Cape Town, South Africa |
| September 15 | November 22–24 | Python Conference Spain 2013 | Madrid, Spain |
| September 15 | April 9–17 | PyCon 2014 | Montreal, Canada |
| September 15 | February 1–2 | FOSDEM 2014 | Brussels, Belgium |
| October 1 | November 28 | Puppet Camp | Munich, Germany |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
The GNU manifesto was written 30 years ago and there will be a celebration
September 28-29 in Cambridge, Massachusetts. "
GNU's 30th will be an opportunity for free software legends and enthusiastic newcomers to tackle important challenges together, while toasting to the GNU system's success. That's why this 30th anniversary event is more than just a celebration, it's also a hackathon. This event isn't just for programmers either; there will also be opportunities to contribute in many other ways besides writing code, such as a Free Software Directory sprint, crypto workshops, and much more. Come join us in developing and documenting free software for our Web-based world, with a focus on federated publishing and communication services, as well as tools to protect privacy and anonymity. Plus, it has cake and coding; what could be better?"
Full Story (comments: none)
The
Australasian Open Source Developers
Conference will take place October 21-23 in Auckland, New Zealand. "
The Australasian Open Source Developers Conference is one of the top conferences for developers in Asia Pacific. It's primarily focused on development with open source tools and development of open source software, and is now in its 10th year."
Full Story (comments: none)
Linux.conf.au will take place January 6-10, 2014 in Perth, Western
Australia. The final selection of miniconfs has been announced.
Full Story (comments: none)
Events: August 29, 2013 to October 28, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| August 30–September 1 | Pycon India 2013 | Bangalore, India |
| September 3–5 | GanetiCon | Athens, Greece |
| September 6–8 | State Of The Map 2013 | Birmingham, UK |
| September 6–8 | Kiwi PyCon 2013 | Auckland, New Zealand |
| September 10–11 | Malaysia Open Source Conference 2013 | Kuala Lumpur, Malaysia |
| September 12–14 | SmartDevCon | Katowice, Poland |
| September 13 | CentOS Dojo and Community Day | London, UK |
| September 16–18 | CloudOpen | New Orleans, LA, USA |
| September 16–18 | LinuxCon North America | New Orleans, LA, USA |
| September 18–20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19–20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19–20 | Open Source Software for Business | Prato, Italy |
| September 19–20 | Linux Security Summit | New Orleans, LA, USA |
| September 20–22 | PyCon UK 2013 | Coventry, UK |
| September 23–25 | X Developer's Conference | Portland, OR, USA |
| September 23–27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24–25 | Kernel Recipes 2013 | Paris, France |
| September 24–26 | OpenNebula Conf | Berlin, Germany |
| September 25–27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26–29 | EuroBSDcon | St Julian's area, Malta |
| September 27–29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3–4 | PyConZA 2013 | Cape Town, South Africa |
| October 4–5 | Open Source Developers Conference France | Paris, France |
| October 7–9 | Qt Developer Days | Berlin, Germany |
| October 12–13 | PyCon Ireland | Dublin, Ireland |
| October 14–19 | PyCon.DE 2013 | Cologne, Germany |
| October 17–20 | PyCon PL | Szczyrk, Poland |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
| October 21–23 | Open Source Developers Conference | Auckland, New Zealand |
| October 21–23 | KVM Forum | Edinburgh, UK |
| October 21–23 | LinuxCon Europe 2013 | Edinburgh, UK |
| October 22–23 | GStreamer Conference | Edinburgh, UK |
| October 22–24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| October 23 | TracingSummit2013 | Edinburgh, UK |
| October 23–24 | Open Source Monitoring Conference | Nuremberg, Germany |
| October 23–25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| October 24–25 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 24–25 | Xen Project Developer Summit | Edinburgh, UK |
| October 24–25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| October 25–27 | vBSDcon 2013 | Herndon, Virginia, USA |
| October 25–27 | Blender Conference 2013 | Amsterdam, Netherlands |
| October 26–27 | PostgreSQL Conference China 2013 | Hangzhou, China |
| October 26–27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol