By Nathan Willis
August 21, 2013
Version 1.4.3 of the open source desktop-publishing (DTP)
application Scribus was released in July. An x.y.z release number
from a project often denotes a trivial update, but in this case the
new release incorporates several visible new features. Changes
include updates to
the barcode generation plugin, the preflight
verifier, and typesetting features. There are also a number of
additions to the application's color palette support, including a
CMYK color-matching system whose owner the project persuaded to
release it as free software.
The release was announced on
the Scribus web site on July 31. Binary packages are available for
Debian, Ubuntu, Fedora, CentOS, openSUSE, SLED, Windows and Mac
OS X—and, for the very first time, for Haiku. The Haiku port was done by
a volunteer from the Haiku development community. Scribus has long
offered builds for "minority" operating systems, including some (like
OS/2 and eComStation) which might make one wonder if acquiring the OS
itself is more of a challenge than porting applications for it.
Typesetting
The 1.4.x series is the stable release series, but 1.4.3 follows
the pattern set by 1.4.1 and 1.4.2 in introducing a handful of new
features. Most notably, 1.4.1 introduced support for a new commercial
color palette system (a move that 1.4.3 repeats on a larger scale),
and 1.4.2 switched over to the Hunspell library for
spell-checking. Hunspell is cross-platform and is used by a wide
array of other applications, which simplifies the Scribus project's
job of maintaining up-to-date dictionaries for spelling and
"morphological" features (e.g., hyphenation break points).
That would be true for any application, but because Scribus is focused
on generating precision-typeset documents, the quality of the spelling
dictionary is arguably more visible—and it trickles down into
other features.
For example, two changes to Scribus's typesetting
features landed in this release, both minor, but part of the project's
ongoing work to bring advanced layout features to end users. The
first is the re-activation of the hyphenation plugin for all Linux
builds. In previous releases, the hyphenator had stopped working
for some Linux distributions; it has been fixed and as now available
to all. But enabling quality hyphenation is a simple job now that
Scribus has migrated over to Hunspell, which provides human-curated
dictionaries for hyphenation breaks in addition to spelling.
Furthermore, because Hunspell is also used by other applications,
Scribus can automatically make use of Hunspell dictionaries installed
by LibreOffice, the operating system, or any other provider.
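The dictionary-driven approach can be sketched in a few lines of Python. The break points and word list below are made up for illustration; real Hunspell hyphenation dictionaries use Liang-style pattern files, not a simple lookup table:

```python
# Toy illustration of dictionary-driven hyphenation. The BREAKS table
# is hypothetical; it stands in for a curated hyphenation dictionary.
BREAKS = {
    "hyphenation": "hy-phen-ation",
    "typesetting": "type-set-ting",
}

def soft_hyphenate(word: str) -> str:
    # Replace dictionary break markers with U+00AD SOFT HYPHEN, which
    # a layout engine may use as a legal break point when justifying.
    return BREAKS.get(word, word).replace("-", "\u00ad")
```

Words absent from the dictionary pass through unchanged, which is why the quality of the curated dictionary matters so much in practice.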
The second change is the addition of Danish to the Short Words plugin. Short Words is an add-on that prevents Scribus from
inserting a line break after certain words (as one might guess, these
are usually short ones) when doing so would awkwardly break up a term
or phrase. The canonical example is honorifics in names: "Mr. Wizard"
kept together on one line, rather than "Mr." stranded at the end of one
line with "Wizard" starting the next. But the issue arises with dates,
product version numbers,
brands, and plenty of other scenarios.
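The underlying mechanism is simple enough to sketch. This is a toy rendition of the idea, not the plugin's actual code, and the word list is illustrative rather than taken from any of its language files:

```python
# Toy version of the Short Words idea: after listed short words,
# replace the following ordinary space with U+00A0 NO-BREAK SPACE so
# the line breaker cannot separate "Mr." from "Wizard".
SHORT_WORDS = ["Mr.", "Dr.", "v."]  # illustrative, not the real list

def bind_short_words(text: str) -> str:
    for w in SHORT_WORDS:
        text = text.replace(w + " ", w + "\u00a0")
    return text
```

Doing this once over a whole document is exactly the kind of bookkeeping that becomes tedious to do by hand at any real size.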
As is the case with the hyphenator, the Short Words plugin performs
a routine task that a user can do by hand (in Short Words's case, by
inserting a non-breaking space character). The goal is to handle the
details automatically, since the manual method becomes burdensome once
documents reach a certain size. Far more features in this vein are in
the works for Scribus; developer Cezary Grabski releases his own custom builds that incorporate
many proposed features for the main branch. Still to come, for
example, is automatic control for widows and
orphans, automatic adjustment of intra-word
spacing, and "typographic"
space adjustments for problematic characters.
Most of these typesetting features are well-implemented in TeX, but
are not implemented in the major open source GUI applications. They
constitute the sort of features that graphic design professionals
expect because proprietary applications already offer them, so the effort
is a welcome one for designers using Scribus.
Color-o-rama
As is the case with advanced typesetting features, designers would
historically claim a feature gap between Scribus and proprietary DTP
tools in the arena of color palette support. On
screen, of course, all colors are displayed as a combination of red,
green, and blue output, but the
print world is considerably more convoluted. Print shops catering to
complex and high-volume jobs offer a range of inks on a variety of
paper stocks, which
is what supports the color-matching industry. Built-in
support for a new color-matching palette means that a Scribus user can
select the palette by name from a drop-down list and select colors
that are known quantities.
Using a palette is akin to
selecting a house paint color from the cards at the paint store, which is far easier than the alternatives: randomly picking out a spot
on the color-selection wheel or fiddling with the hue-saturation-value
sliders. A color picked by its on-screen RGB values may or may not
line up to something convenient in the printed swatch samples, and
there is certainly no guarantee it will look the same when printed. In that sense, color-matching palettes offer a "color by reference"
option. The designer can designate the color of an object by its
value in the color-matching system, and feel confident that the print
shop will accurately reproduce it by looking up that reference.
Scribus has had solid support for both "spot colors" (i.e.,
choosing a specific color for something like an official logo) and
CMYK for years now, but as a practical matter it still simplifies
things for a user when the color palette for his or her favorite color
matching system comes built-in. 1.4.3 adds several new palettes,
including the official palettes used by the UK, Dutch, German,
and Canadian governments, as well as predefined palettes from
Inkscape, LaTeX, Android, Apache OpenOffice, and Creative Commons.
The biggest news on the color palette front, however, is support
for the Galaxy Gauge (GG)
series. GG is a commercial manufacturer of design tools, including
color-matching swatch books. Scribus's Christoph Schäfer convinced GG
to allow Scribus to incorporate its color palette system into the
new release, but GG also decided to go a step further and place the palettes
under an open
license—specifically, the Open Publication License,
which allows publication and modification. Schäfer said
that GG had already shown an interest in the open source creative
graphics community, and is working on a graphic design curriculum for
school-aged children that is built around open source software.
To be sure, GG is a comparatively small player in the color
matching world, but it would be a mistake to underestimate the value
of adding GG support to Scribus just because it is not Pantone or
Roland DG. Although the general public may only be familiar with the
most popular color matching brands, design firms know them all (and,
in fact, are inundated by advertising from them regularly). All of
Scribus's color palettes are found inside of the application's resources/swatches
directory, and Schäfer said in an email that the work of persuading
color matching vendors to allow Scribus to support their products
out-of-the-box is ongoing, with the potential for several significant
additions still to come.
Addenda
Also of significance is the addition of QR Code support to
Scribus's barcode generator. This is a frequently-requested feature.
QR codes have become the de facto standard for consumer-use
barcodes, embedded in magazines, posters, flyers, and other
advertisements. Although it was possible to generate them using other
tools, at best that is an unwanted hassle, and importing an external
QR code into a Scribus document was no picnic either. One might have
to convert it to a vector format, then change the colors, add
transparency, or any number of other transformations. The better
integrated it is with Scribus, the more it will get used.
Several other new features and fixes landed in 1.4.3, including a
fix for a particularly troublesome bug that prevented rendering TeX
frames on paper larger than A4 size. The online user manual also saw
significant revision in this development cycle—which, it could
be argued, is a bigger deal for Scribus than for the average open
source application, considering how intimidating new users can find
it to be.
The development focus for the next major release (1.5.0) includes
more typesetting features; in addition to those mentioned above in
Grabski's branch, support for Asian, Indic, and Middle Eastern
languages is a high priority. So are support for footnotes and
cross-references, a rewrite of the table system, and improved import
of other document types. Schäfer noted that much of this work is
already in place but a lot of it will require extensive testing,
particularly for Microsoft
Publisher and Adobe InDesign
files. At times it seems like Scribus has more irons in the fire than
any single application should, but that is part of the DTP game: every
user has different expectations, which makes it all the more
remarkable that Scribus has implemented as much as it has so far.
Comments (2 posted)
By Nathan Willis
August 21, 2013
SourceForge.net is the longest-running project hosting provider for
open source software. It was launched in 1999, well before BerliOS,
GitHub, Google Code, or most other surviving competitors. Over that
time span, of course, its popularity has gone up and down as free
software development methodologies changed and project leaders demanded
different tools and features. The service is now evidently interested
in offering revenue-generation opportunities to the projects it hosts,
as it recently unveiled a program that enables hosted projects to
bundle "side-loaded" applications into the binary application
installer. Not everyone is happy with the new opportunity.
The service is called DevShare, and SourceForge's Roberto Galoppini
announced
it as a beta program in early July. The goal, he said, is
"giving developers a better way to monetize their projects in a
transparent, honest and sustainable way." The details provided
in the announcement are scant, but the gist appears to be that
projects that opt in to the program will get additional bundled
software applications added to the binary installers that the projects
release. These "side-loaded" applications will not be installed
automatically when the user installs the main program, since the user
must click an "accept" or "decline" button to proceed, but the
installer does try to guide users toward accepting the side-loading
installation. The providers of the side-loaded applications are
apparently paying SourceForge for placement, and the open source
projects that opt in to the program will receive a cut of the revenue.
The DevShare program was invitation-only at the beginning, and
Galoppini's announcement invited other projects to contact
the company if they were interested in participating in the beta
round. The invitation-only and opt-in beta phases make it difficult
to say how many projects are participating in DevShare—or which ones,
specifically, although the announcement pointed to the FTP client
FileZilla as an example. It is also difficult to get a clear picture
of which side-loaded applications are currently deployed. The
announcement says the company "spent considerable time looking
for partners we could trust and building a system that does not
detract from our core user experience," but that does not
appear to have assuaged the fears of many SourceForge users. The
commenters on the Reddit
thread about the move, for instance, were quick to label the
side-loaded offerings "adware," "bloatware," "crapware," and other
such monikers.
At least two of the side-load payload applications are known:
FileZilla includes Hotspot Shield, which is touted as an ad-supported browser security bundle (offering
vague promises of anonymity, HTTPS safety, and firewall tunneling);
other downloads are reported to include a "toolbar" for Ask.com and
related web services. The Ask.com toolbar is a familiar sight in these
situations; it is also side-loaded in the JRE installer from
Oracle, as well as from numerous other software-download sites like
Download.com.
To many free software advocates, the addition of "services" that
make SourceForge resemble Download.com is grounds for ditching
SourceForge as a project hosting provider altogether. Not everyone is
so absolute, however. At InfoWorld, Simon Phipps argued
that DevShare could be implemented in a manner that respects both the
software projects involved and the users, if participation is opt-in
for the projects, the projects can control which applications are
side-loaded, installation for the user is opt-in, malware is not
permitted, and the entire operation is run with transparency.
Phipps concludes that DevShare "seems to score well"
on these points, but that is open to interpretation. For example, one
aspect of Phipps's call for transparency is that SourceForge should provide
an alternate installation option without the side-loading behavior.
But many users have complained that the FileZilla downloads disguise
the side-loading installer under a deceptive name that looks like a
vanilla download. Even if the nature
of the installer is clear once one launches the installer, the
argument goes, surely it is a bait-and-switch tactic to deliver the
installer when users think they are downloading something else.
Indeed, at the moment, clicking on the download link for
FileZilla's
FileZilla_3.7.3_win32-setup.exe
(which is listed
as a 4.8 MB binary package) instead triggers a download for
SFInstaller_SFFZ_filezilla_8992693_.exe, which is a 1 MB executable
originating from the domain apnpartners.com. For now, only Windows
downloads appear to be affected; it is not clear whether that is a
decision on the part of the FileZilla project or SourceForge, or
simply a technical limitation of Hotspot Shield.
Close to two months have now elapsed since the DevShare beta
program was announced, and SourceForge has not followed up with
additional details. The company has put up a "Why am I seeing this
offer?" page that explains the program, how to opt out of the
side-loading installation, and how to uninstall the Ask.com toolbar
(although not how to uninstall Hotspot Shield, for some reason).
Inquisitive users thus do have access to the appropriate information
about the nature of the side-loading installation and how to decline
it, but the page is only linked from within the installer itself.
For its part, the FileZilla project has been fairly blunt about its
participation in the program. On a forum
thread titled "Sourceforge pushing crap EXEs instead of filezilla
installer," developer Tim "botg" Kosse replied simply:
This is intentional. The installer does not install any spyware and clearly offers you a choice whether to install the offered software.
If you need an unbundled installer, you can still download it from
http://download.filezilla-project.org/
Later on in the thread, he assured upset commenters that the
project is taking a stand against the inclusion of malware and spyware
in the bundle, and indicated
that FileZilla had opted out of the Ask.com toolbar, in
favor of "only software which has at least some merit. Please
let me know should that not be the case so that this issue can be
resolved."
It would appear, then, that participating projects do get some say
in what applications are side-loaded with their installers in
DevShare, which places it more in line with Phipps's metrics for
scoring responsible side-loading programs. Nevertheless, based on the
discussion thread, FileZilla's reputation among free software
advocates has taken a hit due to the move. How big of a hit (and
whether or not it will recover) remains to be seen. As DevShare
expands from a closed beta into a wider offering for hosted projects,
if indeed it does so, SourceForge.net will no doubt weather the same
type of backlash.
Comments (19 posted)
Page editor: Jonathan Corbet
Security
By Jake Edge
August 21, 2013
There has been a great deal of fallout from the Snowden leaks so far, and
one gets the sense that there is a lot more coming. One of those
consequences was the voluntary
shutdown of the Silent Mail secure email system. That action was, to
some extent, prompted by the shutdown of the
Lavabit secure email provider, which was also "voluntary", though it
was evidently encouraged by secret US government action. The Silent Mail
shutdown spawned a discussion about verifiability, which is also a topic
we looked at back in June.
Zooko Wilcox-O'Hearn, founder and CEO of LeastAuthority.com, sent an open
letter to Phil Zimmermann and Jon Callas, two of the principals behind
Silent Circle, the company that ran
Silent Mail. Given that Silent Mail was shut down due to concerns about a
government coopting or abusing the service, Wilcox-O'Hearn asked, what
guarantees are
there for users of Silent Circle's other products: Silent Text for secure
text messaging and Silent Phone for voice and video phone calls. There is
little difference between the threats faced by all three products, he
argued:
Therefore, how are your current products any safer for your users that the
canceled Silent Mail product was? The only attacker against whom your
canceled Silent Mail product was vulnerable but against whom your current
products are safe is an attacker who would require you to backdoor your
server software but who wouldn't require you to backdoor your client
software.
Wilcox-O'Hearn went on to point out that the Hushmail
email disclosure in 2007 showed that governments can and will require
backdoors
in both client and server code. At the time of that disclosure, Zimmermann
(who is known as the creator of Pretty Good Privacy, PGP) was on the board
of advisers for Hushmail and noted
that unverified end-to-end encryption is vulnerable to just this kind of
"attack". At the time, Zimmermann said:
Just because encryption is involved, that doesn't give you a talisman
against a prosecutor. They can compel a service provider to cooperate.
That came as something of a surprise to some at the time, though perhaps it
shouldn't have. In any case, given that Silent Circle's code is open
(released under a non-commercial BSD variant license), unlike Hushmail's,
the real problem is that users cannot verify that the source and binaries
correspond, Wilcox-O'Hearn said. It is not only a problem for Silent Circle, but also
for LeastAuthority.com, which runs a service based on the Least Authority File
System (LAFS, aka Tahoe-LAFS), which is open source (GPLv2+ or
the Transitive Grace Period Public License). The open
letter was essentially an effort to highlight this verifiability problem—which affects far more companies than just Silent Circle or
LeastAuthority.com—particularly in the context of government-sponsored attacks or coercion.
Callas replied
to the open letter (both also appeared on the cryptography
mailing list), in essence agreeing with Wilcox-O'Hearn. He noted that there are a
number of theoretical results (Gödel's incompleteness theorems, the Halting
problem, and Ken Thompson's Reflections on Trusting
Trust) that make the verifiability problem hard or impossible. For a
service like Silent Circle's, some trust has to be placed with the
company:
I also stress Silent Circle is a
service, not an app. This is hard to
remember and even we are not as good at it as we need to be. The service is
there to provide its users with a secure analogue of the phone and texting
apps they're used to. The difference is that instead of having utterly no
security, they have a very high degree of it.
Moreover, our design is such to minimize the trust you need to place in
us. Our network includes ourselves as a threat, which is unusual. You're
one of the very few other people who do something similar. We have
technology and policy that makes an attack on us to be unattractive
to the
adversary. You will soon see some improvements to the service that improve
our resistance to traffic analysis.
So, Silent Circle is essentially repeating the situation with Hushmail in
that it doesn't (and really can't) provide verifiable end-to-end
encryption. The binaries it distributes or the server code it is running
could have backdoors, and users have no way to determine whether they do or
don't. The
situation with LeastAuthority.com is a little different as the design of
the system makes it impossible for a LAFS service provider to access the
unencrypted data, even if the server code is malicious. In addition, as
Wilcox-O'Hearn pointed out, the client-side binaries come from Linux
distributions, which build them from source. That
doesn't mean they couldn't have backdoors, of course, but it does raise the
bar considerably.
But even verifying that a source release corresponds to a binary that was
(supposedly) built from it is a difficult problem. The Tor project has
been working on just that problem, however. As we reported in June, Mike
Perry has been tackling
the problem. In a more recent blog
post, he noted some progress with Firefox (which is of particular
interest to Tor), but also some Debian efforts toward
generating deterministic packages, where users can verify that the
source corresponds to the binaries provided by the distribution.
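The end goal of deterministic packaging can be stated in one line: two independent builds of the same source must be bit-identical, so anyone can check a published binary against a local rebuild. A minimal sketch, with byte strings standing in for real build artifacts:

```python
import hashlib

# Sketch of the deterministic-build check: if independent builds of
# the same source produce bit-identical output, their digests match,
# and a user can confirm the distributed binary came from the
# published source by rebuilding and comparing.
def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

build_a = b"identical build output"  # stand-in for a local rebuild
build_b = b"identical build output"  # stand-in for the published binary
assert digest(build_a) == digest(build_b)
```

The hard part, of course, is making real toolchains emit identical bytes (timestamps, paths, and ordering all have to be pinned down); the comparison itself is the easy step.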
The problem of verifying software, particularly security-oriented software,
is difficult, but also rather important. If we are to be able to keep our
communications private in the face of extremely well-heeled adversaries, we
will need to be able to verify that our encryption is truly working end to
end. That, of course, leaves the endpoints potentially vulnerable, but
that means the adversaries—governments, criminals, script kiddies,
whoever—have to target each endpoint separately. That's a much harder job
than just coercing (or attacking) a single service provider.
Comments (6 posted)
Brief items
But, perhaps more important in this
is the revelation of the
20 million queries every single month. Or,
approximately 600,000 queries every day. How about 25,000 queries every
hour? Or 417 queries every minute? Seven queries every single second. Holy
crap, that's a lot of queries.
—
Mike
Masnick is amazed at the number of NSA database queries reported
The pattern is now clear and it's getting old. With each new revelation the
government comes out with a new story for why things are really just fine,
only to have that assertion demolished by the next revelation. It's time
for those in government who want to rebuild the trust of the American
people and others all over the world to come clean and take some actual
steps to rein in the NSA. And if they don't, the American people and the
public, adversarial courts, must force change upon it.
—
Cindy
Cohn and Mark M. Jaycox in the Electronic Frontier Foundation (EFF) blog
The state that is building such a formidable apparatus of surveillance will
do its best to prevent journalists from reporting on it. Most journalists
can see that. But I wonder how many have truly understood the absolute
threat to journalism implicit in the idea of total surveillance, when or if
it comes – and, increasingly, it looks like "when".
—
Alan
Rusbridger in
The
Guardian
But all of my books had un-downloaded and needed to be downloaded
again. The app is an inefficient downloader, almost as bad as the New
Yorker app, so I dreaded this, but clicked on the two I needed most at
once. (I checked the amount of storage used, and indeed the files
really have gone off my tablet.)
And it balked. It turns out that because I am not in a country where
Google Books is an approved enterprise (which encompasses most of the
countries on the planet), I cannot download. Local wisdom among the
wizards here speculates that the undownloading occurred when the
update noted that I was outside the US borders and so intervened.
—
Jim
O'Donnell finds out about a "feature" of Google Books (via
Boing Boing)
Comments (1 posted)
Mozilla has
announced
the FuzzDB repository as a resource for those doing web security testing.
"The attack pattern test-case sets are categorized by platform,
language, and attack type. These are malicious and malformed inputs known
to cause information leakage and exploitation. FuzzDB contains
comprehensive lists of attack payloads known to cause issues like OS
command injection, directory listings, directory traversals, source
exposure, file upload bypass, authentication bypass, http header crlf
injections, and more."
Comments (none posted)
New vulnerabilities
cacti: SQL injection and shell escaping issues
Package(s): cacti
CVE #(s): CVE-2013-1434, CVE-2013-1435
Created: August 19, 2013
Updated: August 23, 2013
Description: Details are somewhat hazy, but the Red Hat bugzilla entry notes a fix for SQL injection and shell escaping (possibly code execution) problems.
Comments (none posted)
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2013-4127
Created: August 20, 2013
Updated: August 21, 2013
Description: From the CVE entry:
Use-after-free vulnerability in the vhost_net_set_backend function in drivers/vhost/net.c in the Linux kernel through 3.10.3 allows local users to cause a denial of service (OOPS and system crash) via vectors involving powering on a virtual machine.
Comments (none posted)
kernel: denial of service
Package(s): linux-lts-raring
CVE #(s): CVE-2013-4247
Created: August 20, 2013
Updated: August 21, 2013
Description: From the Ubuntu advisory:
Marcus Moeller and Ken Fallon discovered that the CIFS incorrectly built certain paths. A local attacker with access to a CIFS partition could exploit this to crash the system, leading to a denial of service.
Comments (none posted)
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2013-2206, CVE-2013-2224
Created: August 21, 2013
Updated: August 21, 2013
Description: From the CVE entries:
The sctp_sf_do_5_2_4_dupcook function in net/sctp/sm_statefuns.c in the SCTP implementation in the Linux kernel before 3.8.5 does not properly handle associations during the processing of a duplicate COOKIE ECHO chunk, which allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via crafted SCTP traffic. (CVE-2013-2206)
A certain Red Hat patch for the Linux kernel 2.6.32 on Red Hat Enterprise Linux (RHEL) 6 allows local users to cause a denial of service (invalid free operation and system crash) or possibly gain privileges via a sendmsg system call with the IP_RETOPTS option, as demonstrated by hemlock.c. NOTE: this vulnerability exists because of an incorrect fix for CVE-2012-3552. (CVE-2013-2224)
Comments (none posted)
libimobiledevice: file overwrite and device key access
Package(s): libimobiledevice
CVE #(s): CVE-2013-2142
Created: August 15, 2013
Updated: August 21, 2013
Description: From the Ubuntu advisory:
Paul Collins discovered that libimobiledevice incorrectly handled temporary files. A local attacker could possibly use this issue to overwrite arbitrary files and access device keys. In the default Ubuntu installation, this issue should be mitigated by the Yama link restrictions.
Comments (none posted)
libtiff: two code execution flaws
Package(s): libtiff
CVE #(s): CVE-2013-4231, CVE-2013-4232
Created: August 19, 2013
Updated: August 28, 2013
Description: From the Red Hat bugzilla entries [1, 2]:
CVE-2013-4231: Pedro Ribeiro discovered a buffer overflow flaw in rgb2ycbcr, a tool to convert RGB color, greyscale, or bi-level TIFF images to YCbCr images, and multiple buffer overflow flaws in gif2tiff, a tool to convert GIF images to TIFF. A remote attacker could provide a specially-crafted TIFF or GIF file that, when processed by rgb2ycbcr and gif2tiff respectively, would cause the tool to crash or, potentially, execute arbitrary code with the privileges of the user running the tool.
CVE-2013-4232: Pedro Ribeiro discovered a use-after-free flaw in the t2p_readwrite_pdf_image() function in tiff2pdf, a tool for converting a TIFF image to a PDF document. A remote attacker could provide a specially-crafted TIFF file that, when processed by tiff2pdf, would cause tiff2pdf to crash or, potentially, execute arbitrary code with the privileges of the user running tiff2pdf.
Comments (none posted)
libtomcrypt: bad prime number calculation
Package(s): libtomcrypt
CVE #(s): (none)
Created: August 19, 2013
Updated: August 21, 2013
Description: The impact is unclear from the Red Hat bugzilla entry, but evidently libtomcrypt has an incorrect test for prime numbers (used to generate keys). It is not thought to have widespread impact.
Comments (none posted)
php-symfony2-HttpFoundation: Request::getHost() poisoning
Package(s): php-symfony2-HttpFoundation
CVE #(s): CVE-2013-4752
Created: August 21, 2013
Updated: August 21, 2013
Description: From the Symfony advisory:
Affected versions: All 2.0.X, 2.1.X, 2.2.X, and 2.3.X versions of the HttpFoundation component are affected by this issue.
As the $_SERVER['HOST'] content is an input coming from the user, it can be manipulated and cannot be trusted. In the recent months, a lot of different attacks have been discovered relying on inconsistencies between the handling of the Host header by various software (web servers, reverse proxies, web frameworks, ...). Basically, everytime the framework is generating an absolute URL (when sending an email to reset a password for instance), the host might have been manipulated by an attacker. And depending on the configuration of your web server, the Symfony Request::getHost() method might be vulnerable to some of these attacks.
Comments (none posted)
php-symfony2-Validator: validation metadata serialization and loss of information
Package(s): php-symfony2-Validator
CVE #(s): CVE-2013-4751
Created: August 21, 2013
Updated: August 21, 2013
Description: From the Symfony advisory:
Affected versions: All 2.0.X, 2.1.X, 2.2.X, and 2.3.X versions of the Validator component are affected by this issue.
When using the Validator component, if Symfony\\Component\\Validator\\Mapping\\Cache\\ApcCache is enabled (or any other cache implementing Symfony\\Component\\Validator\\Mapping\\Cache\\CacheInterface), some information is lost during serialization (the collectionCascaded and the collectionCascadedDeeply fields). As a consequence, arrays or traversable objects stored in fields using the @Valid constraint are not traversed by the validator as soon as the validator configuration is loaded from the cache.
Comments (none posted)
puppet: multiple vulnerabilities
Package(s): puppet
CVE #(s): CVE-2013-4761, CVE-2013-4956
Created: August 16, 2013
Updated: September 20, 2013
Description: From the Ubuntu advisory:
It was discovered that Puppet incorrectly handled the resource_type service. A local attacker on the master could use this issue to execute arbitrary Ruby files. (CVE-2013-4761)
It was discovered that Puppet incorrectly handled permissions on the modules it installed. Modules could be installed with the permissions that existed when they were built, possibly exposing them to a local attacker. (CVE-2013-4956)
Comments (none posted)
putty: code execution
Package(s): putty
CVE #(s): CVE-2011-4607
Created: August 21, 2013
Updated: August 21, 2013
Description: From the Gentoo advisory:
An attacker could entice a user to open connection to specially crafted SSH server, possibly resulting in execution of arbitrary code with the privileges of the process or obtain sensitive information.
Comments (none posted)
python: SSL hostname check bypass
| Package(s): | python |
| CVE #(s): | CVE-2013-4328 |
| Created: | August 19, 2013 |
| Updated: | August 21, 2013 |
| Description: |
From the Mageia advisory:
Ryan Sleevi of the Google Chrome Security Team has discovered that Python's SSL
module doesn't handle NULL bytes inside subjectAltNames general names. This
could lead to a breach when an application uses ssl.match_hostname() to match
the hostname against the certificate's subjectAltName's dNSName general names.
(CVE-2013-4328). |
| Alerts: |
(No alerts in the database for this vulnerability)
|
Comments (none posted)
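The NUL-byte trick works because a name like "good.com\x00.evil.com" can pass a check that stops reading at the first NUL. The helper below is only an illustration of the defensive check (no wildcard handling, and not Python's actual ssl.match_hostname() fix):

```python
def dns_name_matches(san_dnsname, hostname):
    """Reject certificate dNSName values containing embedded NUL bytes
    before comparing. A legitimate DNS name never contains a NUL, so an
    embedded one is always a spoofing attempt. Simplified: exact match
    only, no wildcard support."""
    if "\x00" in san_dnsname:
        return False
    return san_dnsname.lower() == hostname.lower()
```

The upstream fix landed in the ssl module's certificate-matching code; this sketch just shows the essential rejection step.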
smokeping: two XSS vulnerabilities
| Package(s): | smokeping |
| CVE #(s): | CVE-2013-4158, CVE-2013-4168 |
| Created: | August 15, 2013 |
| Updated: | August 21, 2013 |
| Description: |
From the Red Hat Bugzilla entries [1, 2]:
CVE-2013-4158:
The fix for CVE-2012-0790 in smokeping 2.6.7 was incomplete. The
filtering used this blacklist:
$mode =~ s/[<>&%]/./g;
The version in 2.6.9 uses the following blacklist:
my $xssBadRx = qr/[<>%&'";]/;
(', ", and ; have been added. When it is used, blacklist chars are now
turned to _ rather than . ) The 2.6.9 version prevents escaping <html
attribute="..."> via " characters.
The incomplete fix is in 2.6.7 and 2.6.8.
CVE-2013-4168: Another XSS was reported in smokeping, regarding the "start" and "end" time fields. These fields are not properly filtered. This has been fixed in upstream git. |
| Alerts: |
|
Comments (none posted)
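The difference between the two smokeping blacklists is easy to see in a rough Python translation of the Perl substitutions (illustrative only; smokeping itself is written in Perl):

```python
import re

def filter_old(s):
    # smokeping 2.6.7: s/[<>&%]/./g -- quotes and semicolons pass through
    return re.sub(r"[<>&%]", ".", s)

def filter_new(s):
    # smokeping 2.6.9: [<>%&'";] -- blacklisted chars become "_"
    return re.sub(r"""[<>%&'";]""", "_", s)
```

An attribute-escaping payload built around a double quote survives the old filter but is neutralized by the new one, which is exactly the gap CVE-2013-4158 describes.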
znc: denial of service
| Package(s): | znc |
| CVE #(s): | CVE-2013-2130 |
| Created: | August 19, 2013 |
| Updated: | August 23, 2013 |
| Description: |
From the Red Hat bugzilla entry:
Multiple vulnerabilities were reported in ZNC which can be exploited by malicious authenticated users to cause a denial of service. These flaws are due to errors when handling the "editnetwork", "editchan", "addchan", and "delchan" page requests; they can be exploited to cause a NULL pointer dereference. These flaws only affect version 1.0. |
| Alerts: |
|
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.11-rc6,
released on August 18. Linus said: "It's been a fairly quiet week, and
the rc's are definitely shrinking. Which makes me happy." The end of the
3.11 development cycle is getting closer.
Stable updates: Greg Kroah-Hartman has had a busy week, shipping 3.10.7, 3.4.58, and 3.0.91 on August 14, followed by 3.10.8, 3.4.59, and 3.0.92 on August 20. A few hours later,
he released 3.10.9 and 3.0.93 as single-patch updates to fix a
problem in 3.10.8 and 3.0.92. In an attempt to avoid a repeat of that kind
of problem, he is currently considering some
minor tweaks to the patch selection process for stable updates. In
short, all but the most urgent of patches would have to wait for roughly
one week before being shipped in a stable update.
Other stable updates released this week include 3.6.11.7 (August 19), 3.5.7.19 (August 20), and 3.5.7.20 (August 21).
Comments (none posted)
The program committee for the 2013 Kernel Summit (Edinburgh, October 23-25)
has put out a special call for proposals from hobbyist developers — those
who work on the kernel outside of a paid employment situation.
"Since most top kernel developers are not hobbyists these days, this
is your opportunity to make up for what we're missing. As we recognize
most hobbyists don't have the resources to attend conferences, we're
offering (as part of the normal kernel summit travel fund processes) travel
reimbursement as part of being selected to attend." The timeline is
tight: proposals should be submitted by August 24.
Full Story (comments: 9)
Linux.com has a
high-level look at control groups (cgroups), focusing on the problems with the current implementation and the plans to fix them going forward. It also looks at what the systemd project is doing to support a single, unified controller hierarchy, rather than the multiple hierarchies that exist today. "
'This is partly because cgroup tends to add complexity and overhead to the existing subsystems and building and bolting something on the side is often the path of the least resistance,' said Tejun Heo, Linux kernel cgroup subsystem maintainer. 'Combined with the fact that cgroup has been exploring new areas without firm established examples to follow, this led to some questionable design choices and relatively high level of inconsistency.'"
Comments (23 posted)
The Software Freedom Conservancy has
announced
that it has helped Samsung to release a version of its exFAT filesystem
implementation under the GPL. This filesystem had previously been
unofficially released after a copy leaked out
of Samsung. "
Conservancy's primary goal, as always, was to assist
and advise toward the best possible resolution to the matter that complied
fully with the GPL. Conservancy is delighted that the correct outcome has
been reached: a legitimate, full release from Samsung of all relevant
source code under the terms of Linux's license, the GPL, version 2."
Comments (20 posted)
Kernel development news
By Jonathan Corbet
August 21, 2013
Back in 2007, the kernel developers
realized that the maintenance of the last-accessed
time for files ("atime") was a significant performance problem. Atime
updates turned every read operation into a write, slowing the I/O subsystem
significantly. The response was to add the "relatime" mount option that
reduced atime updates to the minimum frequency that did not risk breaking
applications. Since then, little thought has gone into the performance
issues associated with file timestamps.
Until now, that is. Unix-like systems actually manage three timestamps
for each file: along with atime, the system maintains the time of the last
modification of the file's contents ("mtime") and the last metadata change
("ctime"). At a first glance, maintaining these times would appear to be
less of a performance problem; updating mtime or ctime requires writing the
file's inode back to disk, but the operation that causes the time to be
updated will be causing a write to happen anyway. So, one would think, any
extra cost would be lost in the noise.
It turns out, though, that there is a situation where that is not the
case — uses where a file is written through a mapping created with
mmap(). Writable memory-mapped files are a bit of a challenge for
the operating system: the application can change any part of the file with
a simple memory reference without notifying the kernel. But the kernel
must learn about the write somehow so that it can eventually push the
modified data back to persistent storage. So, when a file is mapped for
write access and a page is brought into memory, the kernel will mark that
page (in the hardware) as being read-only. An attempt to write that page
will generate a fault, notifying the kernel that the page has been
changed. At that point, the page can be made writable so that further
writes will not generate any more faults; it can stay writable until the
kernel cleans the page by writing it back to disk. Once the page is clean,
it must be marked read-only once again.
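The write-fault path described above can be exercised from user space with a writable mapping; a store through the mapping is a plain memory reference, and the kernel only learns about it via the fault machinery. A minimal sketch (timestamp behavior varies by kernel and filesystem, so only the data path is checked here):

```python
import mmap

def mmap_write(path, data):
    """Modify the start of an existing file through a shared writable
    mapping: the first store to each clean page triggers a write fault,
    and flush() (msync()) pushes the dirtied page back to the file."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_WRITE) as m:
            m[:len(data)] = data  # memory store, no write() syscall
            m.flush()             # msync(): writeback of the dirty page
```

Under the Single Unix Specification semantics Andy Lutomirski cites later in the article, the file's mtime/ctime must be updated between the store and that flush() call.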
The problem, as explained by Dave Chinner,
is this: as soon as the kernel receives the page fault and makes the page
writable, it must update the file's timestamps, and, for some filesystem
types, an associated
revision counter as well. That update is done synchronously in a filesystem
transaction as part of the process of handling the page fault and allowing
write access. So a quick operation to make a page writable turns into a
heavyweight filesystem operation, and it happens every time the application
attempts to write to a clean page. If the application writes large numbers
of pages that have been mapped into memory, the result will be a painful
slowdown. And most of that effort is wasted; the timestamp updates
overwrite each other, so only the last one will persist for any useful
period of time.
As it happens, Andy Lutomirski has an application that is affected badly by
this problem. One of his
previous attempts to address the associated performance problems —
MADV_WILLWRITE — was covered here
recently. Needless to say, he is not a big fan of the current behavior
associated with mtime and ctime updates. He also asserted that the current
behavior violates the Single Unix Specification, which states that those
times must be updated between any write to a page and either the next
msync() call or the writeback of the data in question. The
kernel, he said, does not currently implement the required behavior.
In particular,
he pointed out that the timestamp updates happen after the first
write to a given page. After that first reference, the page is
left writable and the kernel will be unaware of any subsequent modifications until
the page is written back. If the page remains in memory for a long time
(multiple seconds) before being written back — as is often the case — the
timestamp update will incorrectly reflect the time of the first write, not
the last one.
In an attempt to fix both the performance and correctness issues, Andy has
put together a patch set that changes the
way timestamp updates are handled. In the new scheme, timestamps are not
updated when a page is made writable; instead, a new flag
(AS_CMTIME) is set in the associated address_space
structure. So there is no longer a filesystem transaction that must be done when
the page is made writable. At some future time, the kernel will call the
new flush_cmtime() address space operation to tell the filesystem
that an inode's times should be updated; that call will happen in response
to a writeback operation or an msync() call. So, if thousands of
pages are dirtied before writeback happens, the timestamp updates will be
collapsed into a single transaction, speeding things considerably.
Additionally, the timestamp will reflect the time of the last update
instead of the first.
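The batching idea is easy to model outside the kernel. In the sketch below, the class and method names (DeferredTimestamps, write_fault(), writeback()) are illustrative, not the kernel's API; it shows why collapsing per-fault updates into one per-writeback transaction both saves work and records the time of the last write rather than the first:

```python
class DeferredTimestamps:
    """Toy model of the AS_CMTIME scheme: marking dirty is cheap and
    happens on every write fault; the expensive filesystem transaction
    runs once per writeback and records the last write time."""
    def __init__(self):
        self.cmtime_pending = False  # stands in for the AS_CMTIME flag
        self.last_write = None
        self.transactions = 0        # expensive journal updates performed

    def write_fault(self, now):
        self.cmtime_pending = True   # just set a bit -- no transaction
        self.last_write = now

    def writeback(self):
        if self.cmtime_pending:      # flush_cmtime(): one update for any
            self.transactions += 1   # number of preceding write faults
            self.cmtime_pending = False
            return self.last_write
        return None
```

A thousand write faults cost a thousand flag sets but only one "transaction" at writeback time, which is the performance win the patch set is after.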
There have been some quibbles with this approach. One concern is that
there are tight requirements around the handling of timestamps and revision
numbers in filesystems that are exported via NFS. NFS clients use those
timestamps to learn when cached copies of file data have gone stale; if the
timestamp updates are deferred, there is a risk that a client could work
with stale data for some period of time. Andy claimed that, with the current scheme, the
timestamp could be wrong for a far longer period, so, he said, his patch
represents
an improvement, even if it's not perfect. David Lang suggested that perfection could be reached by
updating the timestamps in memory on the first fault but not flushing that
change to disk; Andy saw merit in the idea, but has not implemented it thus
far.
As of this writing, the responses to the patch set itself have mostly been
related
to implementation details. Andy will have a number of things to change in
the patch; it also needs filesystem implementations beyond just ext4 and a
test for the xfstests package to show that things work correctly. But the
core idea no longer seems to be controversial. Barring a change of opinion
within the community, faster write fault handling for file-backed pages
should be headed toward a mainline kernel sometime soon.
Comments (23 posted)
By Jonathan Corbet
August 21, 2013
As of this writing, the
3.11-rc6 prepatch
is out and the 3.11 development cycle appears to be slowly drawing toward a
close. That can only mean one thing: it must be about time to look at some
statistics from this cycle and see where the contributions came from. 3.11
looks like a fairly typical 3.x cycle, but, as always, there's a small
surprise or two for those who look.
Developers and companies
Just over 10,700 non-merge changesets have been pulled into the repository
(so far) for 3.11; they added over 775,000 lines of code and removed over
328,000 lines for a net growth of 447,000 lines. So this remains a rather slower cycle than 3.10, which
was well past 13,000 changesets by the -rc6 release. As might be expected,
the number of developers contributing to this release has dropped along
with the changeset count, but this kernel still reflects contributions from
1,239 developers. The most active of those developers were:
| Most active 3.11 developers |
| By changesets |
| H Hartley Sweeten | 333 | 3.1% |
| Sachin Kamat | 302 | 2.8% |
| Alex Deucher | 254 | 2.4% |
| Jingoo Han | 190 | 1.8% |
| Laurent Pinchart | 147 | 1.4% |
| Daniel Vetter | 137 | 1.3% |
| Al Viro | 131 | 1.2% |
| Hans Verkuil | 123 | 1.1% |
| Lee Jones | 112 | 1.0% |
| Xenia Ragiadakou | 100 | 0.9% |
| Wei Yongjun | 99 | 0.9% |
| Jiang Liu | 98 | 0.9% |
| Lars-Peter Clausen | 91 | 0.8% |
| Linus Walleij | 90 | 0.8% |
| Johannes Berg | 86 | 0.8% |
| Tejun Heo | 85 | 0.8% |
| Oleg Nesterov | 71 | 0.7% |
| Fabio Estevam | 70 | 0.7% |
| Tomi Valkeinen | 69 | 0.6% |
| Dan Carpenter | 66 | 0.6% |
|
| By changed lines |
| Peng Tao | 260439 | 26.9% |
| Greg Kroah-Hartman | 91973 | 9.5% |
| Alex Deucher | 55904 | 5.8% |
| Kalle Valo | 22103 | 2.3% |
| Ben Skeggs | 20282 | 2.1% |
| Eli Cohen | 15886 | 1.6% |
| Solomon Peachy | 15510 | 1.6% |
| Aaro Koskinen | 13443 | 1.4% |
| H Hartley Sweeten | 11043 | 1.1% |
| Laurent Pinchart | 8923 | 0.9% |
| Benoit Cousson | 8734 | 0.9% |
| Tomi Valkeinen | 8246 | 0.9% |
| Yuan-Hsin Chen | 8222 | 0.9% |
| Tomasz Figa | 7668 | 0.8% |
| Xenia Ragiadakou | 5136 | 0.5% |
| Johannes Berg | 5029 | 0.5% |
| Maarten Lankhorst | 4924 | 0.5% |
| Marc Zyngier | 4817 | 0.5% |
| Hans Verkuil | 4707 | 0.5% |
| Linus Walleij | 4379 | 0.5% |
|
Someday, somehow, somebody will manage to displace H. Hartley Sweeten from
the top of the by-changesets list, but that was not fated to be in the 3.11
cycle. As always, he is working on cleaning up the Comedi drivers in the
staging tree — a task that has led to the merging of almost 4,000
changesets into the kernel so far. Sachin Kamat contributed a large set
of cleanups throughout the driver tree, Alex Deucher is the primary
developer for the Radeon graphics driver, Jingoo Han, like
Sachin, did a bunch of driver cleanup work, and Laurent Pinchart did a lot of
Video4Linux and ARM architecture work.
On the "lines changed" side, Peng Tao added the Lustre filesystem to the
staging tree, while Greg Kroah-Hartman removed the unloved csr driver
from that tree. Alex's Radeon work has already been mentioned; Kalle Valo
added the ath10k wireless network driver, while Ben Skeggs continued to
improve the Nouveau graphics driver.
Almost exactly 200 employers supported work on the 3.11 kernel; the most
active of those were:
| Most active 3.11 employers |
| By changesets |
| (None) | 976 | 9.1% |
| Intel | 970 | 9.1% |
| Red Hat | 911 | 8.5% |
| Linaro | 890 | 8.3% |
| Samsung | 485 | 4.5% |
| (Unknown) | 483 | 4.5% |
| IBM | 418 | 3.9% |
| Vision Engraving Systems | 333 | 3.1% |
| Texas Instruments | 319 | 3.0% |
| SUSE | 310 | 2.9% |
| AMD | 281 | 2.6% |
| Renesas Electronics | 265 | 2.5% |
| Outreach Program for Women | 230 | 2.1% |
| Google | 224 | 2.1% |
| Freescale | 151 | 1.4% |
| Oracle | 137 | 1.3% |
| ARM | 135 | 1.3% |
| Cisco | 132 | 1.2% |
|
| By lines changed |
| (None) | 307996 | 31.9% |
| Linux Foundation | 93929 | 9.7% |
| AMD | 57745 | 6.0% |
| Red Hat | 52679 | 5.5% |
| Intel | 40868 | 4.2% |
| Texas Instruments | 28819 | 3.0% |
| Qualcomm | 26215 | 2.7% |
| Renesas Electronics | 24084 | 2.5% |
| Samsung | 23413 | 2.4% |
| Linaro | 20649 | 2.1% |
| (Unknown) | 17362 | 1.8% |
| IBM | 17337 | 1.8% |
| AbsoluteValue Systems | 16872 | 1.7% |
| Nokia | 16847 | 1.7% |
| Mellanox | 16841 | 1.7% |
| Vision Engraving Systems | 12268 | 1.3% |
| Outreach Program for Women | 11499 | 1.2% |
| SUSE | 10279 | 1.1% |
|
Once again, the percentage of changes coming from volunteers (listed as
"(None)" above) appears to be
slowly falling; it is down from over 11% in 3.10. Red Hat has, for the
second time,
ceded the top non-volunteer position to Intel, but the fact that Linaro is
closing on Red Hat from below is arguably far more interesting. The
numbers also reflect the large set of contributions that came in from
applicants to the Outreach Program for
Women, which has clearly
succeeded in motivating contributions to the kernel.
Signoffs
Occasionally it is interesting to look at the Signed-off-by tags in patches
in the kernel repository. In particular, if one looks at signoffs by
developers other than the author of the patch, one gets a sense for who the
subsystem maintainers responsible for getting patches into the mainline
are. In the 3.11 cycle, the top gatekeepers were:
| Most non-author signoffs in 3.11 |
| By developer |
| Greg Kroah-Hartman | 1212 | 12.3% |
| David S. Miller | 801 | 8.1% |
| Andrew Morton | 611 | 6.2% |
| Mauro Carvalho Chehab | 371 | 3.8% |
| John W. Linville | 285 | 2.9% |
| Mark Brown | 276 | 2.8% |
| Daniel Vetter | 264 | 2.7% |
| Simon Horman | 252 | 2.6% |
| Linus Walleij | 236 | 2.4% |
| Benjamin Herrenschmidt | 172 | 1.7% |
| Kyungmin Park | 157 | 1.6% |
| James Bottomley | 143 | 1.4% |
| Ingo Molnar | 132 | 1.3% |
| Rafael J. Wysocki | 131 | 1.3% |
| Kukjin Kim | 121 | 1.2% |
| Dave Airlie | 121 | 1.2% |
| Shawn Guo | 121 | 1.2% |
| Felipe Balbi | 119 | 1.2% |
| Johannes Berg | 117 | 1.2% |
| Ralf Baechle | 110 | 1.1% |
|
| By employer |
| Red Hat | 2156 | 21.9% |
| Linux Foundation | 1249 | 12.7% |
| Intel | 904 | 9.2% |
| Google | 788 | 8.0% |
| Linaro | 759 | 7.7% |
| Samsung | 429 | 4.4% |
| (None) | 408 | 4.1% |
| IBM | 332 | 3.4% |
| Renesas Electronics | 259 | 2.6% |
| SUSE | 249 | 2.5% |
| Texas Instruments | 237 | 2.4% |
| Parallels | 143 | 1.5% |
| Wind River | 126 | 1.3% |
| (Unknown) | 124 | 1.3% |
| Wolfson Microelectronics | 114 | 1.2% |
| Broadcom | 97 | 1.0% |
| Fusion-IO | 89 | 0.9% |
| OLPC | 87 | 0.9% |
| (Consultant) | 86 | 0.9% |
| Cisco | 80 | 0.8% |
|
We first looked at signoffs for 2.6.22 in
2007. Looking now, there are many of the same names on the list — but also
quite a few changes. As is the case with other aspects of kernel
development, the changes in signoffs reflect the growing importance of the
mobile and embedded sector. The good news, as reflected in these numbers,
is that mobile and embedded developers are finding roles as subsystem
maintainers, giving them a stronger say in the direction of kernel
development going forward.
Persistence of code
Finally, it has been some time since we looked
at persistence of code over time; in particular, we examined how much
code from each development cycle remained in the 2.6.33 kernel. This
information is obtained through the laborious process of running
"git blame" on each file, looking at the commit associated
with each line, and mapping that to the release in which that commit was
merged. Doing the same thing now yields a plot that looks like this:
From this we see that the code added for 3.11 makes up a little over 4% of
the kernel as a whole; as might be expected, the percentage drops as one
looks at older releases. Still, quite a bit of code from the early
2.6.30's remains untouched to this day. Incidentally, about 19% of the
code in the kernel has not been changed since the beginning of the git era;
there are still 545 files that have not
been changed at all since the 2.6.12 development cycle.
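The counting step described above can be sketched as a parser over "git blame --line-porcelain" output, where every blamed line carries a header of the form "<sha> <orig-line> <final-line> [<group-size>]". A minimal, illustrative version:

```python
from collections import Counter

def commits_per_line(porcelain):
    """Count how many lines `git blame --line-porcelain` attributes to
    each commit. Header lines start with a 40-character hex SHA followed
    by two or three decimal fields; everything else (author, summary,
    tab-prefixed content) is skipped."""
    counts = Counter()
    for line in porcelain.splitlines():
        fields = line.split()
        if (len(fields) in (3, 4)
                and len(fields[0]) == 40
                and all(c in "0123456789abcdef" for c in fields[0])
                and all(f.isdigit() for f in fields[1:])):
            counts[fields[0]] += 1
    return counts
```

Mapping each SHA to the release that merged it (the second half of the exercise) would then be a lookup against tag data such as "git describe --contains".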
Another way to look at things would be to see how many lines from each
cycle were in the kernel in 2.6.33 (the last time this exercise was done)
compared to what's there now. That yields:
Thus, for example, the 2.6.33 kernel had about 400,000 lines from 2.6.26;
of those, about 290,000 remain in 3.11. One other thing that stands out is
that the early 2.6.30 development cycles saw fewer changesets merged into the
mainline than, say, 3.10 did, but they added more code. Much of that code
has since been changed or removed, though. Given that much of that code
went into the staging tree, this result is not entirely surprising; the
whole point of putting code into staging is to set it up for rapid change.
Actually, "rapid change" describes just about all of the data presented
here. The kernel process continues to absorb changes at a surprising and,
seemingly, increasing rate without showing any serious signs of strain.
There is almost certainly a limit to the scalability of the current
process, but we do not appear to have found it yet.
Comments (12 posted)
By Jonathan Corbet
August 20, 2013
Some ideas take longer than others to find their way into the mainline
kernel. The network firewalling mechanism known as "nftables" would
be a case in point. Much of this work was done in 2009; despite showing
a lot of promise at the time, the work languished for years afterward.
But, now, there would appear to be a critical mass of developers working on
nftables, and we may well see it merged in the relatively near future.
A firewall works by testing a packet against a chain of one or more rules.
Any of those rules may decide that the packet is to be accepted or
rejected, or it may defer judgment for subsequent rules. Rules may include
tests that take
forms like "which TCP port is this packet destined for?", "is the source IP
address on a trusted network?", or "is this packet associated with a known,
open connection?", for example. Since the tests applied to packets are
expressed in networking terms (ports, IP addresses, etc.), the code that
implements the firewall subsystem ("netfilter") has traditionally contained
a great deal of protocol awareness. In fact, this awareness is built so
deeply into the code that it has had to be replicated four times — for
IPv4, IPv6, ARP, and Ethernet bridging — because the firewall engines are
too protocol-specific to be used in a generic manner.
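The rule-chain model described above (each rule may accept, reject, or defer) can be sketched generically; this is an illustrative model in Python, not netfilter's actual code, and the example rules and verdict names are made up for the demonstration:

```python
ACCEPT, REJECT, CONTINUE = "accept", "reject", "continue"

def evaluate(chain, packet, default=REJECT):
    """Run a packet down a chain of rules; the first ACCEPT or REJECT
    verdict wins, and CONTINUE defers to the next rule. If no rule
    decides, the chain's default policy applies."""
    for rule in chain:
        verdict = rule(packet)
        if verdict in (ACCEPT, REJECT):
            return verdict
    return default

# Rules expressed in networking terms, as in the examples above:
chain = [
    lambda p: ACCEPT if p.get("dport") == 22 else CONTINUE,
    lambda p: REJECT if not p.get("src", "").startswith("10.") else CONTINUE,
    lambda p: ACCEPT if p.get("established") else CONTINUE,
]
```

The protocol awareness lives entirely in the rules; the evaluate() loop itself knows nothing about IPv4, IPv6, ARP, or bridging, which is the property nftables' virtual machine is after.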
That duplication of code is one of a number of shortcomings in netfilter
that have long driven a desire for a replacement. In 2009, it appeared that
such a replacement was in the works when Patrick McHardy announced his nftables project. Nftables replaces the
multiple netfilter implementations with a single packet filtering engine
built on an in-kernel virtual machine, unifying firewalling at the expense
of putting (another) bytecode interpreter into the kernel. At the time,
the reaction to the idea was mostly positive, but work stalled on nftables
just the same. Patrick committed some changes in July 2010; after that, he
made no more commits for more than two years.
Frustrations with the current firewalling code did not just go away,
though. Over time, it also became clear that a general-purpose in-kernel
packet classification engine could find uses beyond firewalls; packet
scheduling is another fairly obvious possibility. So, in October 2012,
current netfilter maintainer Pablo Neira Ayuso announced that he was resurrecting Patrick's
nftables patches with an eye toward relatively quick merging into the
mainline. Since then, development of the code has accelerated, with
nftables discussion now generating much of the traffic on the netfilter
mailing list.
Nftables as it exists today is still built on the core principles designed
by Patrick. It adds a simple virtual machine to the kernel that is able to
execute bytecode to inspect a network packet and make decisions on how that
packet should be handled. The operations implemented
by this machine are intentionally basic: it can get data from the packet
itself, look at the associated metadata (which interface the packet arrived
at, for example), and manage connection tracking data. Arithmetic,
bitwise, and comparison operators can be used to make decisions based on
that data.
The virtual machine is capable of manipulating sets of data (typically IP
addresses), allowing multiple comparison operations to be replaced with a
single set lookup. There is also a "map" type that can be used to store
packet decisions directly under a key of interest — again, usually an IP
address. So, for example, a whitelist map could hold a set of known IP
addresses, associating an "accept" verdict with each.
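The payoff of the map type is that a chain of address comparisons collapses into a single lookup. A few lines of illustrative Python (not nftables' bytecode; the addresses and verdict strings are made up):

```python
# The naive approach: one comparison per rule, O(n) in the rule count.
def verdict_linear(src, rules):
    for addr, verdict in rules:
        if src == addr:
            return verdict
    return "continue"

# The map approach: one hash lookup, as with nftables' map type, with
# the verdict stored directly under the key of interest.
WHITELIST = {"192.0.2.1": "accept",
             "192.0.2.7": "accept",
             "198.51.100.9": "drop"}

def verdict_map(src):
    return WHITELIST.get(src, "continue")
```

Both return the same verdicts; the map version just does it in constant time regardless of how many addresses are listed.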
Replacing the current, well-tuned firewalling code with a dumb virtual
machine may seem like a step backward. As it happens, there are signs that
the virtual machine may be faster than the code it replaces, but there are
a number of other advantages independent of performance. At the top of the
list is removing all of the protocol awareness from the decision engine,
allowing a single implementation to serve everywhere a packet inspection
engine is required. The protocol awareness and associated intelligence
can, instead, be pushed out to user space.
Nftables also offers an improved user-space API that allows the atomic
replacement of one or more rules with a single netlink transaction. That
will speed up firewall changes for sites with large rulesets; it can also
help to avoid race conditions while the rule change is being executed.
The code worked reasonably well in 2009, though there were a lot of loose
ends to tie down. At the top of Pablo's list of needed improvements to
nftables when he picked up the project was a bulletproof compatibility
layer for existing netfilter-based
firewalls. A new rule compiler will take existing firewall rules and
compile them for the nftables virtual machine, allowing current firewall
setups to migrate with no changes needed. This compatibility code should
allow nftables to replace the current netfilter tables relatively quickly.
Even so, chances are that both mechanisms will have to coexist in the
kernel for years. One of the other design goals behind nftables — use of
the existing netfilter hook points, connection-tracking infrastructure, and
more — will make that coexistence relatively easy.
Since the work on nftables restarted, the repository has seen over 70
commits from a half-dozen developers; there has also been a lot of work
going into the user-space nft tool and libnftables
library. The kernel changes have added missing features (the ability to
restore saved counter values, for example), compatibility hooks allowing
existing netfilter extensions
to be used until their nftables replacements
are ready, many improvements to the rule update mechanism, IPv6 NAT
support, packet tracing support, ARP filtering support, and more. The
project appears to have picked up some momentum; it seems unlikely to fall
into another multi-year period without activity before being merged.
As to when that merge will happen...it is still too early to say. The
developers are closing in on their set of desired features, but the code has
not yet been exposed to wide review beyond the netfilter list. All that
can be said with certainty is that it appears to be getting closer and to
have the development resources needed to finish the job.
See the nftables web
page for more information. A terse but
useful HOWTO document has been posted by Eric Leblond; it is probably
required reading for anybody wanting to play with this code, but a quick,
casual
read will also answer a number of questions about what firewalling will look
like in the nftables era.
Comments (29 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
August 21, 2013
Updates to existing packages can occasionally introduce regression
bugs, which cause considerable turmoil when they hit all of a large
distribution's users at the same time. Ubuntu quietly introduced a
new mechanism in its 13.04 release that progressively rolls out
package updates, pushing each update to a small subset of the total user base first, then steadily scaling up, rather
than publishing the update for everyone simultaneously. "Phased updates" (as they
are known) are designed to catch and
revert buggy package updates before they are propagated out to the
entire user community. On
the server side, the distribution monitors crash reports in order to
decide whether each roll out should continue or be stopped for
repair. The client-side framework has been in place since the release
of 13.04, but updates themselves only started phasing in August when
all of the server-side components were ready.
Canonical's Brian Murray wrote an
introduction to the new roll-out mechanism on his blog shortly after the system went
live. The system applies to stable release
updates (SRUs) only. SRUs are updates from the main Ubuntu
repositories that by definition are supposed to ship with a
"high degree of stability" and fix critical bugs—in
contrast, for example, to backport
updates, which can introduce new features from upstream releases and
are not supported by Canonical.
On the client end, phased updates are implemented in the
update-manager tool, which is Ubuntu's graphical update
installation application. The other methods for updating a package,
such as apt-get, are not affected by the phased update plan.
The rationale is that a user using apt-get to update a
package is expressing a conscious intent to install the new version.
update-manager, in contrast, periodically checks the Ubuntu
package repositories in the background for new updates, so it is a
passive tool.
update-manager generates a random number between zero and
one for each package, then compares it to the
Phased-Update-Percentage value published on the server for
that package. If update-manager's
generated number is less than the published percentage, then the
package will be added to the list of available updates that the user can install. Dependencies for a
package are pulled in automatically; if users are in the update group
for foo, they do not also have to "re-roll the dice" (so to
speak) and wait for libfoo-common as well.
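The per-package coin flip described above amounts to a few lines of code. This is a sketch of the logic as the article describes it, not update-manager's actual implementation (the function name is hypothetical):

```python
import random

def in_phase(phased_update_percentage, rng=None):
    """Mimic update-manager's selection: draw a number in [0, 1) for the
    package and offer the update only if the draw falls below the
    published Phased-Update-Percentage."""
    r = (rng or random).random()
    return r < phased_update_percentage / 100.0
```

At 0% no client is offered the update, at 100% every client is, and at intermediate values roughly that fraction of the user base sees it.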
As is probably obvious, controlling the value of
Phased-Update-Percentage throttles the speed at which an
update rolls out. Currently, whenever a new package update is
published, the Phased-Update-Percentage begins at 10%. The
update percentage is incremented by 10% every six hours if nothing
goes wrong, so a complete roll-out takes 54 hours to ramp up to 100%
availability.
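The 54-hour figure follows directly from that schedule; a small sketch makes the arithmetic explicit (the defaults mirror the parameters in the article, which Murray notes are still being tuned):

```python
def rollout_schedule(start=10, step=10, interval_hours=6):
    """Phased-update ramp: start at `start` percent and add `step`
    percent every `interval_hours` until 100%. Returns a list of
    (hours_elapsed, percentage) pairs."""
    sched, pct, hours = [], start, 0
    while pct < 100:
        sched.append((hours, pct))
        pct += step
        hours += interval_hours
    sched.append((hours, 100))
    return sched
```

With the current parameters the schedule runs (0 h, 10%), (6 h, 20%), ... (48 h, 90%), reaching 100% at the 54-hour mark.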
Alternatively, if something does go wrong with an update, the
percentage can be dialed back all the way to zero, at which point the
update can be pulled from the repository then debugged to catch and
repair whatever regressions it introduced.
Regressions are counted based on the number of reports generated by
Ubuntu's crash reporter Apport. Apport gathers
system data for each crash (stack traces, core dumps, environment
variables, system metadata, etc.) and after getting the user's consent,
sends a report in to Launchpad. All reports are logged on the Ubuntu
error tracker; when a newly
released update triggers error reports that were not present with the
previous version of the package, the Ubuntu bug squad will pull the
update. When an update is pulled, both the package signer and the
package uploader (who may, of course, be the same person) are notified
via email.
In addition to the error tracker, the phased update process is
exposed through several other Ubuntu services. The current update
percentage is tracked on the publishing-history page for each package
(a page which was already used to show publication data and status
information for each package). There is also a phased update overview
page where one can see the current status of every SRU in the
phasing process.
At the moment, the overview page only has data going back until
August 7 (two weeks ago as of press time), so naturally there are only
a handful of SRUs included. There are currently three updates at the
90% level, five at 80%, and two at 0%—indicating that they have
been pulled. Those packages are the BAMF support library for Unity
and—perhaps ironically—Apport. Ironic or not, the "Problems" column of the
overview page links to the error reports for the package in question.
For privacy reasons, the individual reports are only visible to
approved members of the bug-triaging team. In an email, Murray said
that the phased update system has caught five distinct regressions
since its launch on August 7, and that nine package updates have
progressed completely to the 100% distribution phase.
Five regressions caught may not seem like many, but in the
context of Ubuntu's large installed user base, catching them before
they are distributed to the entire community is likely to have averted
several thousand application crashes. In his blog post on phased
updates, Murray commented that the system supports some corner cases,
such as not stopping an update if the team knows that the crashes it
sees were not introduced by the update itself. He also pointed out
that the system is new, so the team is still experimenting with the
various parameters (such as the speed of roll-out itself and the
utilities used to detect regressions introduced by a package).
The other interesting dimension of the system is that the subset of
users who get access to the updated package at each phase is a random
sample. That should ensure that the error reports come from a more
statistically valid set of machines than, say, a self-selected "early
adopter" group or a set of customers paying for first access.
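A deterministic way to draw such a sample is to hash a per-machine identifier together with the package and version; this is a sketch of the general idea, not Ubuntu's exact implementation (the function and identifiers are illustrative):

```python
import hashlib

def in_phase(machine_id, source_package, version, percentage):
    """Decide whether this machine sees the update at a given phase.

    Hashing a stable per-machine identifier with the package name and
    version maps each machine to a fixed bucket in [0, 100); the machine
    is included once the phasing percentage passes its bucket.  The same
    machine always lands in the same bucket, so it does not flip in and
    out of the sample as the percentage rises.
    """
    digest = hashlib.sha256(
        f"{machine_id}/{source_package}/{version}".encode()
    ).digest()
    bucket = int.from_bytes(digest[:8], "big") % 100
    return bucket < percentage

# Every machine is included at 100%, none at 0%
assert in_phase("abc123", "apport", "2.12", 100)
assert not in_phase("abc123", "apport", "2.12", 0)
```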
The notion of being the first person to test out an update may make
some users uncomfortable (at least some in the comments on Murray's
blog post suggested as much), but it is important to remember that the
updates being phased in are the SRUs, not experimental updates. SRUs are
already required to go through a testing and sign-off process, so they
should be stable; the fact that there are sometimes still errors and
regressions is simply a fact of life in the software world.
Nevertheless, Murray's post says it is possible to opt out of the
phased update system entirely by adding a directive to
/etc/apt/apt.conf. Opting out means that
update-manager will only report updates as available when
they reach the 100% phase, by which point they should be more
error-free. Alternatively, the impatient can simply use
apt-get and install all updates immediately.
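Per Ubuntu's phased-update documentation, the opt-out looks something like the following; treat the exact directive name as an assumption to verify against your release:

```
// /etc/apt/apt.conf -- have update-manager wait until an update has
// finished phasing (reached 100%) before offering it
Update-Manager::Never-Include-Phased-Updates "true";
```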
Comments (5 posted)
August 21, 2013
This article was contributed by Bruce Byfield
When we last looked at elementary OS in April 2011, it was in
its first beta release. Based on Ubuntu 10.10, it was using GNOME 2.32 and
Docky, and customization was so limited that even the wallpaper could not be changed. Now, with the recent Luna release, much of that has changed. Today, elementary OS features its own desktop and many of its own applications, as well as a focus on both development and design. The customization options have also increased, although they still fall short of those on most desktops.
Elementary OS grew from an icon set called "elementary" that governing
council member Daniel Foré designed around 2007. But, as press lead Cassidy James tells the story: "it gained popularity, but he realized he could only do so much with icons. So he followed it up with an elementary GTK theme, which became pretty popular as well. But he wasn't done. While a well-designed GTK theme is nice, it doesn't fix the underlying user experience of an app. To do that, you need to patch the app or write a new one. So that's the direction elementary took."
Recruiting others from the Ubuntu and GNOME communities to patch Nautilus and work on first-party applications, Foré founded elementary OS as a combined development and design project. Its work was showcased first in the Jupiter release in early 2011, and, more recently in the new Luna release.
These origins have given elementary OS an emphasis on aesthetics as much as code. At least two of the project's council members have backgrounds in design. In fact, James goes so far as to say that "our entire team is led by designers rather than developers."
Many of the project's developers, he added, "are not only excellent at coding, but have an eye for design." This awareness of design is obvious in the unified, minimalist look of the desktop and basic utilities, as well as the attention to branding in everything from the installer to the web pages and the widgets on the desktop dialogs. Foré explained that "essentially, [elementary OS's] existence stems from a desire for great design."
The elementary experience
Elementary OS uses a modified version of the Ubuntu 13.04 installer. The
most noticeable changes are a bare minimum of text and a reliance instead
on icons and check boxes. These changes generally work, but at times they
come at the expense of context, to the extent that inexperienced users
might be at a loss without the User Guide,
since no online help is included.
The Pantheon desktop and its utilities often show their influences. Its fixed panel, for instance, is reminiscent of Unity, and so are its minimalist scroll bars. Similarly, the login manager is based on LightDM, and the Files file manager on Nautilus. The overall effect is as though GNOME 2 were using OS X widgets.
So why go to the trouble of rewriting these basic applications, instead of
collecting them the way an average distribution does? Pantheon's individual
applications do contain small enhancements — for instance, the music player
includes the option to adjust the equalizer based on the genre of the
current track, and the file manager has a button for re-opening closed
tabs. However, such features are not really must-haves.
What binds the desktop and utilities is elementary OS's standardization on GTK3 and Vala. "Vala has been a great language to work with," Foré said, "seeing as it's been built specifically in conjunction with GObject while offering the same low barrier-of-entry as other modern languages like C#."
An important part of this standardization is Granite, which Foré describes as not "so much a separate framework as an expansion of GTK3. As we began to develop our apps, we realized we were using a lot of the same chunks of code over and over again. We built Granite in order to centralize this code, avoiding idiosyncrasies and ensuring that bug fixes propagate to all of our apps."
Thanks to Granite, the desktop and utilities share common features and
behaviors. The (admittedly subjective) result is that Pantheon compares
favorably to any desktop for speed. The project prominently lists "speedy"
as one of the goals of elementary OS — something that seems to have been
achieved.
Just as importantly, the desktop and utilities have a common design
theme, which includes selection using a single click across the
desktop, and no
menu or minimization button for windows. Instead, the close button is on
the far left of the title bar and the maximize button is on the far
right. Windows are minimized by dragging them to a hot corner of the
display; that corner is chosen by the user in
the System
Settings dialog. Similarly, instead of a taskbar or a virtual workspace
switcher, keyboard commands are used to open graphical lists at the bottom
of the screen.
When elementary OS borrows applications, it favors those built with similar design principles, such as the Yorba Foundation's Geary email client and Shotwell photo manager, or the Midori web browser. Other applications are borrowed from GNOME, including Empathy and Totem, more out of functional necessity apparently than design compatibility, "much like how Xfce is not a GNOME 2 desktop, but uses several GNOME 2 technologies," James explained.
In addition to utilities with a common look, feel, and performance, the Luna release also adds one or two configuration choices — despite the fact that its developers assume that users expect to have choices made for them.
This assumption is less irritating than you might expect — the
desktop fonts, for example, display well even at the small size at which
they appear in the panel, while the icons usually work even for a
text-oriented diehard like me. All the same, a desktop is highly personal
for those who spend 8-12 hours a day in front of one, and the loudest
complaints about elementary OS are likely to concern the lack of choices
for fonts, themes, and other customizations. Choosing the wallpaper is
unlikely to be enough.
Overall, Pantheon is an immense improvement over the GNOME desktop in elementary OS's first release, transforming it from a derivative into something original. However, the impression Pantheon gives is of a work in progress, of something more than a proof of concept but less than finished. The logical expectation would be for more native applications and more customization in the next release. Meanwhile, Pantheon seems promising, just slightly less than complete.
Looking to the next release
All the same, elementary OS succeeds well enough in development and design
that the Luna release has attracted a moderate degree of buzz. According to
Foré, the new release had some 120,000 downloads in the week after
its release, while lead developer Cody Garver notes that the project has
some 40 contributors, of whom 10-15 are regular committers. By any
standards, the project has become a respectable size, and appears to be growing steadily.
Project members are careful to avoid giving details about future plans. However, Foré does reveal that "we are looking into online accounts integration and ways to provide cloud services to our users." In addition, James mentions that "we've begun exploring more responsive design, client-side window decorations, and different ways for apps to interact with the shell."
In the last few years, elementary OS has evolved from a fledgling project to one of the more interesting desktop environments. Not only has it incorporated development and design as much as or more than any free desktop — and with far fewer resources than many — but it has gone far further than most Linux distributions in coordinating its pieces into a coherent form. For this reason, project members prefer to refer to elementary OS as a "software platform" rather than a distribution.
That preference may seem like nothing but attitude, but it is hard to argue with that attitude when it has delivered on such ambitious plans. With Luna, elementary OS has exceeded the uncertain promise of its first release and become a project worth watching, flaws and all.
Comments (4 posted)
Brief items
Compersion, n: the feeling you get when someone else also takes good care of one of your packages.
--
Enrico
Zini
Debian OS
Twenty years, still relevant
This is what we are
The base of many
But without our great work
They could not exist
It is an honor
To be reused in that way
Be proud of your work
Enjoy the evening
Celebrate with us all night
There be poetry
--
Gerfried
Fuchs
Since I run Debian on my computers, I do not play anymore to 3D shooting games, not because of the lack of Free 3D drivers, but because developing Debian is more fun and addictive.
--
unknown
(from quotes compiled by Ana Guerrero and Francesca Ciceri)
Comments (2 posted)
An early version of the GNU Radio LiveDVD is available for testing.
"
We've been using a bootable DVD for our private on-site training
courses when our clients are in environments that do not allow using the Ettus
Research LiveUSB drive. This has been useful enough that we've decided
to make it publicly available as an ISO file download from the GNU Radio
website."
Full Story (comments: none)
Distribution News
Fedora
The August 14 meeting of the Fedora Engineering Steering Committee
revisited
the question of whether sendmail
should be in the default install. This time, though, the results were
different: FESCo decided, by a vote of five to two, to not install sendmail
by default. Discussions at the recent Flock conference, it seems, were
instrumental in changing some minds.
Full Story (comments: 21)
Newsletters and articles of interest
Comments (none posted)
TechWeek Europe has
a
survey of open-source mobile operating systems competing with Android.
"
For Intel, Tizen represents another avenue into the mobile space
where smartphones and tablets are completely dominated by ARM-based chips,
while Samsung merely wants to reduce its dependency on Android – something
that in the past has led it to dabble with Windows Phone
handsets. Tizen looks like another back-up option for Samsung, but its efforts
have gained credibility since it merged its in-house OS Bada with Tizen. If
it goes with a different OS besides Android, Tizen would be it."
Comments (3 posted)
Page editor: Rebecca Sobol
Development
By Jake Edge
August 21, 2013
Being able to remotely track a lost or stolen phone, or to delete
("wipe") the personal data stored on it, can be rather
useful. Unfortunately, most of the solutions for doing so come
with strings attached. Generally, the app provider, a random attacker, or an employer can
trigger the tracking or wiping, which is likely not what the phone owner
had in mind. CyanogenMod, an
alternative Android distribution, is taking
another approach to the problem, one that will only allow the owner
of the
device to remotely track or wipe it.
Phones and other devices are frequently misplaced, and sometimes stolen.
In the former case, tracking the phone down, either by its GPS location or
by simply ringing it, is helpful. For stolen phones, the location may be
of use to law enforcement, but being able to delete all of the personal
information stored on the device is an important, possibly even critical,
feature.
On August 19, the project announced
CyanogenMod Account, which is an optional service to provide these
features. As one might expect from a project like
CyanogenMod, all of the code is
open source, and the project is encouraging both potential users and
security researchers to scrutinize it.
The idea, as outlined in the announcement, is to preserve the privacy of
users and to protect them from the service being abused. "We cannot track
you or wipe your device. We designed the protocol in such a way that makes
it impossible for anyone but you to do that." That stands in direct
contrast to Android Device
Manager, for example, which stores location information on Google's
servers. In
addition, that code is closed, so it will be difficult for anyone to verify
what, exactly, it does. There are other Android solutions, of course, but
seemingly none that are open source—or focused on user privacy.
The new feature has not yet been added to the CyanogenMod nightlies, but
can be found
on GitHub for those interested in testing. Some more details about the
implementation, and its privacy protection, can be found in a Google+
post. There are three pieces to the puzzle: an app running on the phone,
code on the CyanogenMod servers, and a JavaScript client running
in a browser. To communicate, the browser and device set up a secure
channel, mediated by
(but not visible to) the server.
In order to set up that channel, the browser generates a public/private key
pair and
prompts the user for their password. The password is cryptographically
hashed (using an HMAC,
hash-based message authentication code)
with the public key and sent to the server, which forwards it to the
device. The device, which must also have the password, can recompute the
hash and thereby authenticate the public key. It then creates a
symmetric session key that it encrypts with the transmitted public key and
sends it to the browser. The server cannot decrypt this session key because
it doesn't have the private key, but the browser can, so a secure
channel is established.
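The password-based authentication step can be sketched with Python's standard hmac module; the values here are stand-ins, and the real protocol's details may differ:

```python
import hashlib
import hmac
import os

password = b"device-owner-password"  # shared secret (hypothetical value)
public_key = os.urandom(32)          # stand-in bytes for the browser's public key

# Browser side: bind the public key to the password with an HMAC before
# sending it through the (untrusted) CyanogenMod server.
tag = hmac.new(password, public_key, hashlib.sha256).digest()

# Device side: recompute the HMAC with its own copy of the password; a
# match proves the key came from someone who knows the password, so the
# server cannot substitute a key of its own.
expected = hmac.new(password, public_key, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```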
At that point, the browser can request location information or ask the
device to wipe the personal data stored on it. It could also request the
device to ring, which could be handy if it is simply lost under the sofa
cushion. Other actions (e.g. remotely taking pictures) are obviously
possible as well.
So far, the main criticism of the feature is its web-based nature. A
man-in-the-middle attacker could potentially feed dodgy JavaScript to the
user's browser, which could, in turn, send password information onward.
That problem should mostly be mitigated by using HTTPS to connect to the
CyanogenMod
server—but what if that server itself has been compromised? Based on
recent events, it is not just "traditional" attackers that are being
considered here, but also shadowy government "security" agencies with
secret orders to force the project to serve malware. That is an
unfortunate failure mode for all web services these days (and, sadly,
probably long in the past as well), but the code is open, so one can at
least run
their own server—and await a (presumably unlikely) visit from said shadowy
entities.
The announcement also mentions plans for further services to be added to
CyanogenMod Account. One of those is a "Secure SMS" system that Moxie
Marlinspike is
currently working on, presumably along the lines of his TextSecure
application. In the meantime, the track and wipe feature will eventually
make its way into the nightlies and then into a release. Before too long,
CyanogenMod users will have a superior solution to the lost phone problem.
Comments (4 posted)
Brief items
"Java is found everywhere [...] even in your car" - I assume that's a threat?
—
Michael Stum
YOU WOULD HAVE TO BE SOME KIND OF LUNATIC TO USE THIS IN
PRODUCTION CODE RIGHT NOW. It is so alpha that it begins the Greek
alphabet. It is so alpha that Jean-Luc Godard is filming there. It
is so alpha that it's 64-bit RISC from the 1990s. It's so alpha
that it'll try to tell you that you belong to everyone else. It's
so alpha that when you turn it sideways, it looks like an ox. It's
so alpha that the Planck constant wobbles a little bit whenever I
run the unit tests.
—
Nick Mathewson
Comments (3 posted)
Version
1.6 of the QEMU hardware emulator is available. New features include
live migration over RDMA, a new 64-bit ARM
TCG target, support for
Mac OS X guests, and more; see
the changelog for details.
Comments (12 posted)
Tom Lechner, the comic book artist and developer behind the Laidout impositioning application, has adapted Laidout's signature paper-folding tool into an HTML5 program. The Laidout folder allows the user to fold a variety of paper sizes into booklets, pamphlets, and other bound materials, and unfold them into a flat layout with proper margin edges and page orientations automatically calculated. We first covered Laidout in 2010.
Comments (none posted)
GNOME has set up an official mirror of its entire set of source repositories on GitHub. Alberto Ruiz describes the move as "a starting point for people
wanting to have a public branch where they can publicize their work
even if they don't have a GNOME account. It should also help
maintainers keep track of the work people is doing out there with
their code." The announcement also notes that there is no plan to support pull requests from GitHub branches.
Full Story (comments: 1)
Version 1.0 of the devpi server tool is available. Devpi allows users to deploy a cache of the Python Package Index (PyPI), or to run a completely internal PyPI instance.
Full Story (comments: none)
A new stable release of the GNU Privacy Guard (GnuPG) encryption suite is available. Version 2.0.21 introduces several changes to gpg-agent, adds support for ECDSA SSH keys, and can now be installed as a "portable" application on Windows systems.
Full Story (comments: none)
Firefox 23.0.1 has been released. This version enables mixed content blocking
along with other updates and bug fixes. The
release
notes contain additional information.
Comments (38 posted)
Newsletters and articles
Comments (none posted)
At the Canonical Design blog, Tingting Zhao has written a detailed look at defining and articulating the "tasks" that are given out as prompts in usability testing. This includes finding the correct amount of detail, as well as navigating the distinction between closed- and open-ended tasks, or "direct" and "scenario" tasks. With scenario tasks, for example, "some participants may experience uncertainty as to where to look and when they have accomplished the task. Others may be more interested in getting the test done, and therefore do not put in as much effort as what they would in reality."
Comments (none posted)
On his blog, Miguel de Icaza
touts C# (and F#) async as a model for asynchronous programming that is superior to the mechanisms offered by other languages. He notes that using callbacks for asynchronous programming turns programmers into "
glorified accountants" in much the same way goto statements did, as Edsger Dijkstra's famous "Go To Statement Considered Harmful" paper described.
"
And this is precisely where C# async (and F#) come in. Every time you put the word "await" in your program, the compiler interprets this as a point in your program where execution can be suspended while some background operation takes place. The instruction just in front of await becomes the place where execution resumes once the task has completed."
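Python's async/await follows the same suspension-point model, so the idea can be illustrated without C#; a minimal sketch:

```python
import asyncio

async def fetch_data():
    # Stand-in for a background operation (network I/O, a timer, etc.)
    await asyncio.sleep(0.01)
    return 42

async def main():
    # Execution of main() is suspended at the await while fetch_data()
    # runs; the code after the await resumes once the task completes.
    result = await fetch_data()
    return result

assert asyncio.run(main()) == 42
```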
Comments (86 posted)
Continuing our recent usability theme (
GNOME usability and
Ubuntu usability), Jos Poortvliet has some
tips and lessons learned from a usability workshop that he and Björn Balazs ran at this year's
Akademy. "
The goal was to teach developers how to do 'basic usability testing at home' by guiding users through their application and watching the process. To help developers who didn't make it (and those who did but can use a reminder) I hereby share a description of the process and some tips and notes." Videos from two of the tests are shown as well.
Comments (1 posted)
Over at
The Washington Post, Timothy B. Lee
looks at the
ZMap network scanning tool that was announced (
slides [PDF]) at the
USENIX Security conference on August 16. "
In contrast, ZMap is "stateless," meaning that it sends out requests and then forgets about them. Instead of keeping a list of [outstanding] requests, ZMap cleverly encodes identifying information in outgoing packets so that it will be able to identify responses. The lower overhead of this approach allows ZMap to send out packets more than 1,000 times faster than Nmap. So while an Internet-wide scan with Nmap takes weeks, ZMap can (with a gigabit network connection) scan the entire Internet in 44 minutes." Beyond just the tool itself, Lee also looks at the results of some of the research that ZMap has facilitated in areas like HTTPS adoption, security flaw fixing, and when the internet sleeps.
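The "identifying information in outgoing packets" trick can be sketched in Python; ZMap uses mutable header fields such as the TCP sequence number for this, though the key derivation below is illustrative rather than ZMap's actual scheme:

```python
import hashlib
import hmac

SCAN_KEY = b"per-scan secret"  # generated fresh for each scan (hypothetical)

def probe_token(dest_ip):
    """Derive a 32-bit validation token for a probe sent to dest_ip.

    A stateless scanner places a value like this in a header field that
    the remote host echoes back (for TCP, the sequence number, returned
    as the ACK number minus one), so responses can be matched to probes
    without keeping a table of outstanding requests.
    """
    mac = hmac.new(SCAN_KEY, dest_ip.encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def response_is_ours(src_ip, echoed_token):
    # Recompute the token from the responder's address; a match means
    # the response corresponds to a probe this scan actually sent.
    return echoed_token == probe_token(src_ip)

assert response_is_ours("192.0.2.1", probe_token("192.0.2.1"))
assert not response_is_ours("192.0.2.2", probe_token("192.0.2.1"))
```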
Comments (9 posted)
Libre Graphics World (LGW) covers
the initial release of PrintDesign, a new vector graphics editor that
started out as a refactoring of the aging sK1 illustration program. The
preview release is a work in progress, but LGW notes the project's
"good progress for about half a year of work, especially if you
consider that some newly added features are unavailable due to the
recently started UI rewrite." The review also comments that
PrintDesign seems to be targeting desktop publishing, which makes it
distinct from the Inkscape vector editor.
Comments (none posted)
Page editor: Nathan Willis
Announcements
Brief items
Groklaw founder Pamela Jones has
announced
that the site is shutting down in response to pervasive Internet
surveillance. "
What to do? I've spent the last couple of weeks
trying to figure it out. And the conclusion I've reached is that there is
no way to continue doing Groklaw, not long term, which is incredibly
sad. But it's good to be realistic. And the simple truth is, no matter how
good the motives might be for collecting and screening everything we say to
one another, and no matter how "clean" we all are ourselves from the
standpoint of the screeners, I don't know how to function in such an
atmosphere. I don't know how to do Groklaw like this." (Groklaw's
previous
shutdown was in 2011).
Comments (116 posted)
Calls for Presentations
CFP Deadlines: August 22, 2013 to October 21, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| August 22 | September 25–September 27 | LibreOffice Conference 2013 | Milan, Italy |
| August 30 | October 24–October 25 | Xen Project Developer Summit | Edinburgh, UK |
| August 31 | October 26–October 27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| August 31 | September 24–September 25 | Kernel Recipes 2013 | Paris, France |
| September 1 | November 18–November 21 | 2013 Linux Symposium | Ottawa, Canada |
| September 6 | October 4–October 5 | Open Source Developers Conference France | Paris, France |
| September 15 | November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| September 15 | November 15–November 16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| September 15 | October 3–October 4 | PyConZA 2013 | Cape Town, South Africa |
| September 15 | November 22–November 24 | Python Conference Spain 2013 | Madrid, Spain |
| September 15 | April 9–April 17 | PyCon 2014 | Montreal, Canada |
| September 15 | February 1–February 2 | FOSDEM 2014 | Brussels, Belgium |
| October 1 | November 28 | Puppet Camp | Munich, Germany |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
Ohio LinuxFest has announced that Jon "maddog" Hall will be a keynote
speaker at the 2013 event, to be held September 13-15 in Columbus, Ohio.
Full Story (comments: none)
There will be a "UEFI Plugfest" held concurrently with the Linux Plumbers
Conference in New Orleans on September 19 and 20. "
This
event is intended to provide the Linux community with an opportunity to
conduct interoperability testing with a variety of UEFI implementations.
Additionally, the event will feature technical presentations related to
UEFI advancements and key technology insights." Joining the UEFI
Forum at the "Adopter" level is required to attend.
Full Story (comments: 5)
The Tcl/Tk User Association confirmed that John Ousterhout will be a
Featured Speaker at the conference in New Orleans, LA from Sept 23-27,
2013. "
Ousterhout is the original developer of the Tcl and Tk programming
language, a combination of the Tool Command Language and the Tk
graphical user interface toolkit (Tk). His presentation will focus on
the evolution of Tcl/Tk from its original language format created at
the University of California Berkeley to the most robust and
easy-to-learn dynamic programming language that seamlessly powers
today's applications. He is also the author of Tcl and the Tk ToolKit
(2nd Edition)."
Full Story (comments: none)
Enlightenment Developer Day will take place October 20 in Edinburgh, UK,
co-located with LinuxCon Europe. "
The day will be a full day of
presentations, panels, discussions and the
ability for developers and users to get together face-to-face, present the
state of things and where they are going, ask questions, propose ideas and
otherwise have a jolly good time."
Full Story (comments: none)
The Linux Foundation has
announced
the keynotes and program for LinuxCon Europe and CloudOpen Europe. These
events, and others, are co-located in Edinburgh, Scotland October 21-23, 2013.
Comments (none posted)
PGConf China 2013 will be held in Hangzhou, China October 26-27.
Full Story (comments: none)
There will be a Mini-DebConf/DebCamp in Cambridge, UK November 14-17,
2013. "
I'm expecting that we will end up discussing and working on
the new arm64 port and other ARM-related topics at the very least, but there's
obviously also scope for other subjects for sprint work and talks."
Full Story (comments: none)
Events: August 22, 2013 to October 21, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| August 22–August 25 | GNU Hackers Meeting 2013 | Paris, France |
| August 23–August 24 | Barcamp GR | Grand Rapids, MI, USA |
| August 24–August 25 | Free and Open Source Software Conference | St. Augustin, Germany |
| August 30–September 1 | Pycon India 2013 | Bangalore, India |
| September 3–September 5 | GanetiCon | Athens, Greece |
| September 6–September 8 | State Of The Map 2013 | Birmingham, UK |
| September 6–September 8 | Kiwi PyCon 2013 | Auckland, New Zealand |
| September 10–September 11 | Malaysia Open Source Conference 2013 | Kuala Lumpur, Malaysia |
| September 12–September 14 | SmartDevCon | Katowice, Poland |
| September 13 | CentOS Dojo and Community Day | London, UK |
| September 16–September 18 | CloudOpen | New Orleans, LA, USA |
| September 16–September 18 | LinuxCon North America | New Orleans, LA, USA |
| September 18–September 20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19–September 20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19–September 20 | Open Source Software for Business | Prato, Italy |
| September 19–September 20 | Linux Security Summit | New Orleans, LA, USA |
| September 20–September 22 | PyCon UK 2013 | Coventry, UK |
| September 23–September 25 | X Developer's Conference | Portland, OR, USA |
| September 23–September 27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24–September 25 | Kernel Recipes 2013 | Paris, France |
| September 24–September 26 | OpenNebula Conf | Berlin, Germany |
| September 25–September 27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26–September 29 | EuroBSDcon | St Julian's area, Malta |
| September 27–September 29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3–October 4 | PyConZA 2013 | Cape Town, South Africa |
| October 4–October 5 | Open Source Developers Conference France | Paris, France |
| October 7–October 9 | Qt Developer Days | Berlin, Germany |
| October 12–October 13 | PyCon Ireland | Dublin, Ireland |
| October 14–October 19 | PyCon.DE 2013 | Cologne, Germany |
| October 17–October 20 | PyCon PL | Szczyrk, Poland |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol