By Jonathan Corbet
September 11, 2013
The mainstream news has been dominated in recent months by the revelation
of the scope of the surveillance carried out by the US National Security
Agency (NSA). This activity has troubling implications that cover just
about every aspect of modern life. But discussion of the implications for
free software has been relatively muted. Perhaps it is time to start
thinking about that aspect of the situation in a more direct way. We live
in a time of interesting challenges, but also interesting opportunities.
Some of the recent leaks have made it clear that the NSA has worked
actively to insert weaknesses into both cryptographic standards and
products sold by vendors. There is, for example, some evidence that the
NSA has inserted
weaknesses into some random-number generation standards, to the point that
the US National Institute of Standards and Technology has felt
the need to reopen the public comment period for the 800-90A/B/C random number
standards, in which there is little confidence at this point. While no
compromised commercial products have yet been named, it seems increasingly
clear that such products must exist.
It is tempting to believe that the inherent protections that come with free
software — open development processes and code review — can protect us from
this kind of attack. And to an extent, that must be true. But it behooves
us to remember just how extensively free software is used in almost every
setting from deeply embedded systems to network routers to supercomputers.
How can such a software system not be a target for those bent on increasing
the surveillance state? Given the resources available to those who would
compromise our systems, how good are our defenses?
In that context, this
warning from Poul-Henning Kamp is worth reading:
Open source projects are built on trust, and these days they are
barely conscious of national borders and largely unaffected by any
real-world politics, be it trade wars or merely cultural
differences. But that doesn't mean that real-world politics are not
acutely aware of open source projects and the potential advantage
they can give in the secret world of spycraft.
To an intelligence agency, a well-thought-out weakness can easily
be worth a cover identity and five years of salary to a top-notch
programmer. Anybody who puts in five good years on an open source
project can get away with inserting a patch that "on further
inspection might not be optimal."
Given the potential payoff from the insertion of a vulnerability into a
widely used free software project, it seems inevitable that attempts have
been made to do just that. And, it should be noted, the NSA is far from
the only agency that would have an interest in compromising free software.
There is no shortage of well-funded intelligence agencies worldwide, many
of which operate with even less oversight than the NSA does. Even if the
NSA has never caused the submission of a suspect patch to a free software
project, some other agency almost certainly has.
Some concerns about this kind of compromise have already been expressed;
see, for example, the various discussions (example)
about the use of Intel's RDRAND instruction to
add entropy to the kernel's pool of random data (see also
Linus responding
to those concerns in typical Linus style). This
lengthy Google+ discussion on random-number generation is worth
reading; along with a lot of details on how that process works, it covers
other concerns — like whether the NSA has forced companies like Red Hat to
put backdoors into their Linux distributions. As people think through the
implications of all that has been going on, expect a lot more questions
to be raised about the security of our software.
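The crux of Linus's response is that the kernel does not use RDRAND output directly: hardware randomness is XORed into data derived from other entropy sources, so even a backdoored instruction cannot make the result easier to predict. Here is a minimal userspace sketch of that mixing idea (not the kernel's actual code); it assumes an x86 CPU with RDRAND and a compiler invoked with -mrdrnd:

    /* Sketch: mix hardware randomness into a pool word instead of
     * trusting it outright.  Even if the hardware output were chosen
     * by an adversary, XORing it in cannot remove entropy that the
     * pool word already carries. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t mix_in_rdrand(uint64_t pool_word)
    {
        unsigned long long hw;

        /* _rdrand64_step() returns 0 if no random data was ready. */
        if (_rdrand64_step(&hw))
            pool_word ^= hw;        /* mix, never replace */
        return pool_word;
    }

    int main(void)
    {
        uint64_t pool = 0x243f6a8885a308d3ULL;  /* stand-in pool state */
        printf("%016llx\n", (unsigned long long)mix_in_rdrand(pool));
        return 0;
    }

An attacker who controls the hardware generator learns nothing new from the XOR; the output is at least as unpredictable as the pool state alone.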
Predicting an increased level of concern about security is easy; figuring
out how to respond
is rather harder. Perhaps the best advice comes from The Hitchhiker's
Guide to the Galaxy: don't panic. Beyond anything else, we need
to resist any temptation to engage in witch hunts. While it is entirely
possible that somebody — perhaps even a trusted community figure — has
deliberately inserted a vulnerability into a free software project, the
simple truth remains that most bugs are simply bugs. If developers start
to come under suspicion for having made a mistake, we could find
ourselves driving some of our best contributors out of the community,
leaving us weaker than before.
That said, we do need to start looking at our code more closely. We have a
huge attack surface — everything from the kernel to libraries to network
service daemons to
applications like web browsers — and, with no external assistance at all,
we succeed in adding far too many security bugs across that entire surface.
There is clearly a market for the location and exploitation of those bugs,
and there is quite a bit of evidence that governments are major buyers in
that market. It is time that we got better at reviewing our code and
reduced the supply of raw materials to the market for exploitable
vulnerabilities.
Much of our existing code base needs to be looked at again, and quite a bit
of it is
past due for replacement. The OpenSSL code is an obvious target, for
example; it is also widely held to be incomprehensible and unmaintainable,
making auditing it for security problems that much harder.
There are projects out there that are intended to replace OpenSSL (see Selene, for example), but the
job is not trivial. Projects like this could really use more attention,
more contributors, and more auditors.
Another challenge is the proliferation of systems running old software.
Enterprise Linux distributions are at least supported with security
updates, but old, undisclosed
vulnerabilities can persist there for a long time. Old handsets (for
values of "old" that are often less than one year) that no
longer receive updates are nearly impossible to fix. Far worse, though,
are the millions of old Linux-based routers. Those devices tend to be
deployed and forgotten about; there is usually no mechanism for
distributing updates even if the owners are aware of the need to apply
them. Even projects like OpenWRT tend
to ignore the security update problem. Given that spy agencies are
understandably interested
in attacking routers, we should really be paying more attention to the
security of this kind of system.
While many in the community have long believed that a great deal of
surveillance was going on, the current revelations have still proved to be
shocking, and they have severely undermined trust in our communications
systems. Future disclosures, including, one might predict, disclosures of
activities by agencies that are in no way allied with the US, will make the
problem even worse. The
degree of corporate collaboration in this activity is not yet understood,
but even now there is, unsurprisingly, a great deal of suspicion that
closed security-relevant products may have been compromised. There is not
a lot of reason to trust what vendors are saying (or not saying) about
their products at this point.
This setting
provides a great opportunity for free software to further establish itself
as a secure alternative. The maker of a closed product can never
effectively respond to
suspicions about that product's integrity; free software, at least, can be
inspected for vulnerabilities. But to take
advantage of this opening, and, incidentally, help to make the world a more
free place, we need to ensure that we have our own act together. And that
may well require that we find a way to become a bit more paranoid while not
wrecking the openness that makes our communities work.
Comments (77 posted)
By Jake Edge
September 11, 2013
Reverting a patch, at least one that isn't causing a bug or regression, is
often controversial. Normally, the patch has been technically vetted
before it was merged, so there is—or can be—a non-technical reason behind its
removal. That is the case with the recent reversion
of a patch
to add XMir support to the Intel video driver. As might be guessed,
rejecting support for the X compatibility layer of the Mir display server
resulted in a loud hue and cry—with conspiracy theories aplenty.
The patch adding support for XMir was merged into the xf86-video-intel
driver tree on
September 4 by maintainer Chris Wilson. That driver
is the user-space X.org code for supporting Intel GPUs; it is code that
Intel has
developed and maintains. The commit message noted
that the XMir API had likely been frozen so support for the API was being
added to
the driver. The patch consists of less than 300 lines of code, most of it
confined to a new sna_xmir.c file. Based on the commit and
message, Wilson clearly didn't see any reason not to merge the patch.
All of that changed sometime before the revert on September 7, which
also prompted the release of the 2.99.902
snapshot. In the NEWS file for the snapshot was the
following message:
We do not condone or support Canonical in the course of action they have
chosen, and will not carry XMir patches upstream.
-The Management
There are a number of possible interpretations for that statement, but,
however it was meant, it
was certain to raise the ire of Canonical and/or Mir fans—and it did.
When asked about the removal of XMir support, Wilson pointed to Intel
management for answers. I
contacted Dirk Hohndel, CTO
of the Intel Open Source Technology Center, who answered the main question
at hand: Intel's "engineering team and the senior technical people made the decision that we needed to continue to focus
our efforts on X and Wayland", he said. It was a question of focus, he
said, "adding more targets to our QA and
validations needs, having to check more environments for regressions [...]
would require us to cut somewhere else".
So removing support for XMir was requested by Intel management, but
seemingly did not sit very well with Wilson. One suspects the
NEWS file entry did not get approved, for example. But it's hard
to see that any reversion (or outright rejection) of the XMir support would
have led to a different outcome. Ubuntu has a legion of fans, who can
often be quite vocal when they believe their distribution is being treated
unfairly.
Michael Hall, a Canonical employee on the Community team, obliquely
referenced the XMir removal in a post to Google+: "You will
not make
your open source project better by pulling another open source project
down."
The argument that Hall and others make is that because Intel supports
Wayland, it is hamstringing Mir by
removing support for it,
and, in effect, helping to
keep Mir as a
single-distribution display server. "This
just strikes me as trying to win the race by tripping the competition, not
by running faster", Hall said in the comments.
But accepting any code into a codebase you maintain is a burden at some
level. Supporting a new component, like a display server, also requires a
certain amount of testing. All of those things need to be weighed before
taking on that maintenance. As Matthew Garrett put it (also in the
comments to Hall's post):
Intel commit to supporting the code that they ship, even if that would
require them to write or fix large amounts of code to keep it
working. Keeping the XMir code comes at a cost to Intel with (at present)
zero benefit to Intel. As long as XMir is a single-distribution solution,
it's unsurprising that they'd want to leave that up to the distribution.
Certainly Canonical can continue to carry the XMir patches for the Intel
video driver. It is, after all, carrying its own display server code in
addition to its Unity user interface and various other Ubuntu-specific
components. But Hall sees the "single-distribution solution" as a
self-fulfilling prophecy:
Upstream won't take patches because other distros don't use it. Other
distros don't use it because other DE's don't use it. Other DE's don't use
it because it requires upstream patches that haven't been accepted.
Upstream won't accept the patches because other distros don't use it.
Since its initial attempt—with less than
stellar results—Canonical has not really tried to make any kind of
compelling technical argument about Mir's superiority
or why any other
distribution (or desktop environment) would want to spend time working on
it (as opposed to, say, Wayland). The whole idea is to have a display
server that serves Unity's needs
and will run on multiple form factors in a time frame that Canonical
requires. That's not much of an argument for other projects to jump
on board.
As Garrett points out, Canonical has instead chosen the route of
"winning in the market", which is going to require that it
shoulder most or all of the burden until that win becomes apparent.
Casting the rejection of XMir as an attack of some kind is not sensible,
Garrett said:
Refusing to adopt code that doesn't benefit your project in any way isn't a
hostile act, any more than Canonical's refusal to adopt code that permitted
moving the Unity launcher was a hostile act or upstream Linux's refusal to
adopt the Android wakelock code was a hostile act. In all cases the code in
question simply doesn't align with the interests of the people maintaining
the code.
Other comment threads (for example on Reddit here
and here)
followed a similar pattern. Intel focusing on Wayland and X is seen as Mir
(or Canonical) bashing, with some positing that it really was an attempt to
prop up Tizen vs. Ubuntu Touch (or some other Canonical mobile initiative).
Or that Intel believes Wayland is so badly broken it needs to stop the Mir
"momentum" any way it can. Most of that seems fairly far-fetched.
One can understand Intel's lack of interest in maintaining support for
XMir without resorting to convoluted reasons—though the size of the patch
and how self-contained it is do lead
some to wonder a bit. There is a risk
for Intel in doing so, however. As Luc Verhaegen, developer of the Lima driver for ARM Mali GPUs, pointed out
in a highly critical blog
post, Intel could actually end up harming its own interests:
By not carrying this patch, Intel forces Ubuntu users to only report bugs
to Ubuntu, which then means that only few bug reports will filter through
to the actual driver developers. At the same time, Ubuntu users cannot
simply test upstream code which contains extra debugging or potential
fixes. Even worse, if this madness continues, you can imagine Intel stating
to its customers that they refuse to fix bugs which only appear under Mir,
even though there is a very very high chance of these bugs being real
driver bugs which are just exposed by Mir.
At this point, though, Intel may well be waiting to see the "proof of the
pudding". If Canonical is successful at getting Mir onto the desktops of
lots of Intel customers in the next year or two, one suspects that any
needed changes for Mir or XMir will be cheerfully added to the Intel video
driver. For now, the company loses little, and gains some maintenance and
testing time, by waiting for it all to play out.
In the end, there is an element of a "tempest in a teapot" to the whole
affair. We are talking about 300 lines of code that, evidently, won't need
much in the way of changes in the future (since the API is frozen). Intel
is almost certainly embarrassed by how the whole thing played out, and
Ubuntu fans will undoubtedly see it as yet another diss of their
favorite distribution. But in the final analysis, the impact on Mir users
will be minimal to non-existent, at least in the short term and probably
the long as well.
Comments (155 posted)
Page editor: Jonathan Corbet
Security
By Jake Edge
September 11, 2013
A paper
presented at the Privacy Law
Scholars Conference in June asks an interesting question: what
are the implications of allowing law enforcement to use existing
vulnerabilities to wiretap the internet? In some sense, current events
have outrun the paper's focus as we now know that the NSA has been using
vulnerabilities in its quest for every last bit of internet traffic, but
there are legitimate questions raised by the paper. If, someday, the US
returns to the idea of actual oversight of domestic (at least) internet
surveillance, it will be worth considering the tradeoffs described in the
paper.
The paper starts by pointing out that critics of the Communications
Assistance for Law Enforcement Act (CALEA), which mandated wiretap-friendly
interfaces for telephony equipment, were fully justified by later events.
Those interfaces were illegally used in a number of different ways,
including wiretapping
a large number of Greek politicians in 2005.
Extending CALEA to the internet, which is something the FBI has been
advocating, will predictably lead to similar abuses, so it is worthwhile to
look at alternatives.
The authors, Steven M. Bellovin, Matt Blaze, Sandy Clark, and Susan Landau,
instead propose that the FBI be authorized to use existing
vulnerabilities for wiretapping. Rather than requiring vendors to insert
vulnerabilities into their code so that the FBI can wiretap voice-over-IP
(VoIP) and other communications, just recognize that there are already
vulnerabilities available that allow the required access. But, there are a
number of consequences—along with ethical questions—that stem from allowing
that behavior.
The wide-ranging paper covers a lot of ground. Some of the more
interesting technical discussion has to do with vulnerabilities
themselves. The authors' argument, essentially, is that there will always be
vulnerabilities available that will allow the capabilities needed by law
enforcement. It is simply a matter of finding or obtaining them, then
using them against the target for whom a warrant has been issued. Even if
a CALEA-style law were passed for internet communications, they argue,
there would still be a need for vulnerability-based wiretapping. There is
existing software that doesn't implement the interfaces and targets may be
using end-to-end encryption, for example.
But in order to gain access to the "right" vulnerabilities for the target
(which would need to be determined by some kind of "technical
reconnaissance"), the FBI would need to access the vulnerability "black
market". Since the goal of wiretapping is different than that of typical
attackers, any exploit would likely need to be modified to have a
"wiretapping payload" rather than the usual spambot, remote-access, or
credential-stealing payloads. There is, in short, quite a bit of work that
would need to be done before bits of VoIP data start flowing to the cops.
From what we know now, it would be far easier to just ask the NSA.
But, assuming the NSA option closes down at some point, the ethical
dilemmas surrounding this whole idea still pose some significant hurdles.
For example, if the FBI knows about a highly useful vulnerability that is
also being exploited by botnet herders or other criminals, will it report
the hole? Or if a company is about to release an update that closes a hole
being actively used, will pressure be applied to delay (or subvert) the
release? How does the FBI ensure that its wiretapping tools aren't
disseminated to the underworld? There are, of course, plenty more
questions beyond just those.
Overall, it is an interesting quandary. On the one hand, routing around a
"CALEA for the internet" is certainly attractive. The harm to both
innovation and privacy that could be caused by such legislation is huge.
On the other hand, though, turning the FBI and other law enforcement
organizations into players on the malware stage has its own set of
dangers. The authors conclude that those dangers (or "uncomfortable
issues" as they call them) are less of a concern than the legislative
solution. Unfortunately for all of us, legislators and law enforcement
rarely grasp the idea that there might be solutions outside of new laws.
In fact, the NSA revelations may have shown an entirely different way to
operate without any new laws.
Comments (12 posted)
Brief items
In other circumstances I also found situations where NSA employees
explicitly lied to standards committees, such as that for cellphone
encryption, telling them that if they merely debated an
actually-secure protocol, they would be violating the export control
laws unless they excluded all foreigners from the room (in an
international standards committee!). The resulting paralysis is how
we ended up with encryption designed by a clueless Motorola employee
-- and kept secret for years, again due to bad NSA export control
advice, in order to hide its obvious flaws -- that basically XOR'd
each voice packet with the same bit string!
—
John
Gilmore
So, in pointing to implementation vulnerabilities as the most likely
possibility for an NSA "breakthrough," I might have actually erred a bit
too far on the side of technological interestingness. It seems that a
large part of what the NSA has been doing has simply been strong-arming
Internet companies and standards bodies into giving it backdoors. To put
it bluntly: sure, if it wants to, the NSA can probably read your email.
But that isn't mathematical cryptography's fault—any more than it
would be
mathematical crypto's fault if goons broke into your house and carted away
your laptop. On the contrary, properly-implemented, backdoor-less strong
crypto is something that apparently scares the NSA enough that they go to
some lengths to keep it from being widely used.
—
Scott Aaronson
Government and industry have betrayed the internet, and us.
By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.
This is not the internet the world needs, or the internet its creators envisioned. We need to take it back.
—
Bruce
Schneier issues a call to action
[Wickr's Nico] Sell has yet to receive a secret order, so she can legally report in each transparency report: "Wickr has received zero secret orders from law enforcement and spy agencies. Watch closely for this notice to disappear." When the day came that her service had been served by the NSA, she could provide an alert to attentive users (and, more realistically, journalists) who would spread the word. Wickr is designed so that it knows nothing about its users' communications, so an NSA order would presumably leave its utility intact, but notice that the service had been subjected to an order would be a useful signal to users of other, related services.
—
Cory
Doctorow suggests a "dead man's switch"
Comments (19 posted)
On his blog, Kurt Roeckx
rounds up the current state of encryption, especially as it relates to Secure Sockets Layer/Transport Layer Security (SSL/TLS). He looks at key lengths, techniques (like Diffie-Hellman for perfect forward secrecy), ciphers, random numbers, and existing software, showing where the likely vulnerabilities lie. "A lot of the algorithms depend on good random numbers. That is that the attacker can't guess what a (likely) random number you've selected. There have been many cases of bad RNG [random number generator] that then resulted in things getting broken. It's hard to tell from the output of most random number generators that they are secure or not.
One important thing is that the RNGs gets seeded with random information (entropy) to begin with. If it gets no random information, very limited amount of possible inputs or information that is guessable as input, it can appear to give random numbers, but they end up being predictable. There have been many cases where this was broken."
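As a toy illustration of the seeding point (this example is not from Roeckx's post), a generator seeded with a guessable value produces output an attacker can reproduce exactly:

    /* Seeding a PRNG with a guessable value makes every "random"
     * number it produces guessable too: this program prints the same
     * sequence on every run, and an attacker who can narrow the seed
     * down to a small range can simply try all candidates. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        srand(42);                   /* guessable seed */
        for (int i = 0; i < 4; i++)
            printf("%d\n", rand()); /* identical output on every run */
        return 0;
    }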
Comments (21 posted)
New vulnerabilities
exactimage: denial of service
Package(s): exactimage
CVE #(s): CVE-2013-1441
Created: September 11, 2013
Updated: September 11, 2013
Description: From the Debian advisory:
It was discovered that exactimage, a fast image processing library,
does not correctly handle error conditions of the embedded copy of
dcraw. This could result in a crash or other behaviour in an
application using the library due to an uninitialized variable being
passed to longjmp.
Comments (none posted)
fedora-business-cards: insecure temporary file usage
Package(s): fedora-business-cards
CVE #(s): CVE-2013-0159
Created: September 10, 2013
Updated: September 11, 2013
Description: From the Red Hat bugzilla:
Michael Scherer reported that the fedora-business-cards script used
/tmp/fedora-business-cards-buffer.svg as a temporary file, which could be
used in symlink attacks to overwrite the contents of a file with write
permissions to the person running fedora-business-cards.
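The usual fix for this class of bug is to create temporary files with unpredictable names and exclusive-create semantics; a minimal sketch follows (the name template is illustrative, not the actual fedora-business-cards fix):

    /* mkstemp() replaces the X's with random characters and opens the
     * file with O_CREAT|O_EXCL, so a symlink planted in advance at a
     * fixed, well-known path cannot redirect the write. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char path[] = "/tmp/business-cards-XXXXXX";
        int fd = mkstemp(path);

        if (fd < 0) {
            perror("mkstemp");
            return 1;
        }
        printf("writing buffer to %s\n", path);
        /* ... write the SVG data via fd ... */
        close(fd);
        unlink(path);
        return 0;
    }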
Comments (none posted)
gdm: privilege escalation
Package(s): gdm
CVE #(s): CVE-2013-4169
Created: September 6, 2013
Updated: September 11, 2013
Description: From the Red Hat advisory:
A race condition was found in the way GDM handled the X server sockets
directory located in the system temporary directory. An unprivileged user
could use this flaw to perform a symbolic link attack, giving them write
access to any file, allowing them to escalate their privileges to root.
(CVE-2013-4169)
Note that this erratum includes an updated initscripts package. To fix
CVE-2013-4169, the vulnerable code was removed from GDM and the initscripts
package was modified to create the affected directory safely during the
system boot process. Therefore, this update will appear on all systems,
however systems without GDM installed are not affected by this flaw.
Comments (none posted)
kernel: code execution
Package(s): EC2 kernel
CVE #(s): CVE-2013-1060
Created: September 6, 2013
Updated: September 11, 2013
Description: From the Ubuntu advisory:
Vasily Kulikov discovered a flaw in the Linux Kernel's perf tool that
allows for privilege escalation. A local user could exploit this flaw to
run commands as root when using the perf tool.
Comments (none posted)
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2012-5375
Created: September 6, 2013
Updated: September 11, 2013
Description: From the Ubuntu advisory:
A denial of service flaw was discovered in the Btrfs file system in the
Linux kernel. A local user could cause a denial of service (prevent file
creation) for a victim, by creating a file with a specific CRC32C hash
value in a directory important to the victim.
Comments (none posted)
LibRaw: denial of service
Package(s): LibRaw
CVE #(s): CVE-2013-1439
Created: September 10, 2013
Updated: September 11, 2013
Description: From the Fedora advisory:
Specially crafted photo files may trigger a series of conditions in
which a null pointer is dereferenced leading to denial of service in
applications using the library. These three vulnerabilities are
in/related to the 'faster LJPEG decoder', which upstream states was
introduced in LibRaw 0.13 and support for which is going to be dropped
in 0.16.
Comments (none posted)
phpbb3: file overwrites
Package(s): phpbb3
CVE #(s):
Created: September 9, 2013
Updated: September 11, 2013
Description: From the Debian advisory:
Andreas Beckmann discovered that phpBB, a web forum, as installed in
Debian, sets incorrect permissions for cached files, allowing a
malicious local user to overwrite them.
Comments (none posted)
python-django: directory traversal
Package(s): python-django
CVE #(s): CVE-2013-4315
Created: September 11, 2013
Updated: September 19, 2013
Description: From the Debian advisory:
Rainer Koirikivi discovered a directory traversal vulnerability with
'ssi' template tags in python-django, a high-level Python web
development framework.
It was shown that the handling of the 'ALLOWED_INCLUDE_ROOTS' setting,
used to represent allowed prefixes for the {% ssi %} template tag, is
vulnerable to a directory traversal attack, by specifying a file path
which begins as the absolute path of a directory in
'ALLOWED_INCLUDE_ROOTS', and then uses relative paths to break free.
To exploit this vulnerability an attacker must be in a position to alter
templates on the site, or the site to be attacked must have one or more
templates making use of the 'ssi' tag, and must allow some form of
unsanitized user input to be used as an argument to the 'ssi' tag.
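The mistake generalizes well beyond Django: a prefix check on the raw path accepts ".." sequences that resolve elsewhere. A hypothetical C analogue (not Django's actual check) shows the broken test and a canonicalizing one side by side:

    /* A naive prefix test accepts "/tmp/../etc/passwd" because the
     * string starts with "/tmp"; canonicalizing with realpath() first
     * resolves the ".." components and rejects it. */
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int naive_check(const char *root, const char *path)
    {
        return strncmp(path, root, strlen(root)) == 0;
    }

    static int resolved_check(const char *root, const char *path)
    {
        char real[PATH_MAX];

        if (!realpath(path, real))     /* canonicalize first */
            return 0;
        return strncmp(real, root, strlen(root)) == 0;
    }

    int main(void)
    {
        const char *evil = "/tmp/../etc/passwd";

        printf("naive:    %d\n", naive_check("/tmp", evil));    /* 1: accepted */
        printf("resolved: %d\n", resolved_check("/tmp", evil)); /* 0: rejected */
        return 0;
    }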
Comments (none posted)
subversion: privilege escalation
| Package(s): | subversion |
CVE #(s): | CVE-2013-4277
|
| Created: | September 9, 2013 |
Updated: | September 25, 2013 |
| Description: |
From the Fedora advisory:
svnserve takes a --pid-file option which creates a file containing the process id it is running as.
It does not take steps to ensure that the file it has been directed at is not a symlink. If the
pid file is in a directory writeable by unprivileged users, the destination could be replaced by a
symlink allowing for privilege escalation. svnserve does not create a pid file by default. |
| Alerts: |
|
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The 3.12 merge window is still open, so there is no development
kernel as of this writing.
Stable updates:
3.10.11, 3.4.61, and 3.0.95 were all released on September 7;
3.2.51 came out on September 11.
Comments (none posted)
Dropping the spinlocks means more cores; unfortunately, a quad-core
seems to be the limit. Users must divide their time between reading
history and contributing to the present: some amount of persistent
data is a must on every user's machine. Pixel seems to be heading
in the wrong direction: that's what is stressing us out.
— Somebody seems to have unleashed
a robot
on linux-kernel.
Let's see if I can remember the candidates...
rcu_is_cpu_idle() # reversed sense from the others
rcu_is_ignored() # reversed sense from the others
rcu_is_not_active() # reversed sense from the others
rcu_is_watching_cpu()
rcu_read_check()
rcu_is_active()
rcu_is_active_local()
rcu_is_online()
rcu_is_watching_task()
rcu_is_watching_thread()
rcu_is_watching_you()
all_your_base_are_belong_to_rcu()
rcu_is_active_loco()
rcu_kilroy_was_here()
Maybe I should just lock them all in a room overnight and see which
are still alive in the morning.
—
Paul McKenney struggles with naming
Comments (1 posted)
Kernel development news
By Jonathan Corbet
September 11, 2013
As of this writing, nearly 8,500 non-merge changesets have been pulled into
the mainline repository for the 3.12 development cycle; almost 5,000 of
those have been pulled since
last week's
summary. The process was
slowed
somewhat when Linus's primary disk drive failed, but not even hardware
failure can stop the kernel process for long.
This development cycle continues to feature a large range of internal
improvements and relatively few exciting new features. Some of the
user-visible changes that have been merged include:
- The direct rendering graphics layer has gained the concept of "render
nodes," which separate the rendering of graphics from modesetting and
other display control; the "big three" graphics drivers all support
this concept. See this
post from David Herrmann for more information on where this work
is going.
- The netfilter subsystem supports a new "SYNPROXY" target that
simulates connection establishment on one side of the firewall before
actually establishing the connection on the other. It can be thought
of as a way of implementing SYN cookies at the perimeter, preventing
spurious connection attempts from traversing the firewall.
- The TSO sizing patches and FQ
scheduler have been merged. TSO sizing helps to eliminate bursty
traffic when TCP segmentation offload is being used, while FQ provides
a simple fair-queuing discipline for traffic transiting through the
system.
- The ext4 filesystem has a new journal_path= mount option that
allows the specification of an external journal's location using a
device path name.
- The Tile architecture has gained support for ftrace, kprobes, and full
kernel preemption. Also, support for the old TILE64 CPU has been
removed.
- The xfs filesystem is finally able to support user namespaces. The
addition of this support should make it easier for distributors to
enable the user namespace feature, should they feel at ease with the
security implications of such a move.
- Mainline support for ARM "big.LITTLE" systems is getting closer; 3.12
will include a new cpuidle driver that builds on the multi-cluster power management patches to
provide CPU idle support on big.LITTLE systems.
- The MD RAID5 implementation is now multithreaded, increasing its
maximum I/O rates when dealing with fast drives.
- The device mapper has a new statistics module that can track I/O
activity over a range of blocks on a DM device. See Documentation/device-mapper/statistics.txt
for details.
- The device tree code now feeds the entire flattened device tree text
into the random number pool in an attempt to increase the amount of
entropy available at early boot. It is not clear at this point how
much benefit is gained, since device trees are mostly or entirely
identical for a given class of device. It is possible for a device
tree to hold unique data — network MAC addresses, for example — but
that is not guaranteed, and some developers think that entropy would
be better served by just feeding the unique data directly.
- New hardware support includes:
- Systems and processors:
Freescale P1023 RDB and C293PCIE boards.
- Graphics:
Qualcomm MSM/Snapdragon GPUs.
The nouveau graphics driver has also gained proper power
management support, and the power management support for Radeon
devices has been improved and extended to a wider range of chips.
- Miscellaneous:
GPIO-controlled backlights,
Sanyo LV5207LP backlight controllers,
Rohm BD6107 backlight controllers,
IdeaPad laptop slidebars,
Toumaz Xenif TZ1090 GPIO controllers,
Kontron ETX/COMexpress GPIO controllers,
Fintek F71882FG and F71889F GPIO controllers,
Dialog Semiconductor DA9063 PMICs,
Samsung S2MPS11 crystal oscillator clocks,
Hisilicon K3 DMA controllers,
Renesas R-Car HPB DMA controllers, and
TI BQ24190 and TWL4030 battery charger controllers.
- Networking:
MOXA ART (RTL8201CP) Ethernet interfaces,
Solarflare SFC9100 interfaces, and
CoreChip-sz SR9700-based Ethernet devices.
- Video4Linux:
Renesas VSP1 video processing engines,
Renesas R-Car video input devices,
Mirics MSi3101 software-defined radio dongles (the first SDR
device supported by the mainline kernel),
Syntek STK1135 USB cameras,
Analog Devices ADV7842 video decoders, and
Analog Devices ADV7511 video encoders.
Changes visible to kernel developers include:
- The GEM and TTM memory managers within the graphics subsystem are now
using a unified subsystem for the management of virtual memory areas,
eliminating some duplicated functionality.
- The new lockref mechanism can now mark
a reference-counted item as being "dead." The separate state is
needed because lockrefs can be used in places (like the dentry cache)
where an item can have a reference count of zero and still be alive
and usable. Once the structure has been marked as dead, though, the
reference count cannot be incremented and the structure cannot be used.
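As a rough sketch of how that dead state fits the pattern just described (the lockref calls match the new API; the surrounding structure and functions are hypothetical):

    #include <linux/lockref.h>
    #include <linux/spinlock.h>

    /* Hypothetical cache item patterned on the dentry case: it may sit
     * at a reference count of zero and still be alive and usable,
     * until it is explicitly killed under its lock. */
    struct cached_item {
        struct lockref ref;
        /* ... payload ... */
    };

    static int item_grab(struct cached_item *item)
    {
        /* Fails only if the item was marked dead, not merely at zero. */
        return lockref_get_not_dead(&item->ref);
    }

    static void item_kill(struct cached_item *item)
    {
        spin_lock(&item->ref.lock);
        lockref_mark_dead(&item->ref);  /* further gets will now fail */
        spin_unlock(&item->ref.lock);
    }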
The closing of the merge window still looks to happen on September 15, or,
perhaps, one day later to allow Linus to get back up to speed after his
planned weekend diving experience.
Comments (7 posted)
By Jake Edge
September 11, 2013
The reporting and handling of security issues is a tricky proposition.
There are numerous competing interests to try to balance, and a general
tendency toward secrecy that can complicate things further. Thus it is not
surprising that kernel developers are discussing security handling on the
Kernel
Summit discussion mailing list (ksummit-2013-discuss).
It seems likely that discussion will pick up again at the summit itself,
which will be held in Edinburgh, October 23-25.
James Bottomley kicked off the discussion by noting
that several recent fixes had gone into the kernel without following the
normal process because they were "security fixes". Given that some of
those fixes caused problems of
various sorts, he is concerned about circumventing the process simply
because the patches fix security issues:
In both cases we had commits with cryptic messages, little explanation
and practically no review all in the name of security.
Our core processes for accepting code require transparency, review and
testing. Secrecy in getting code into the kernel is therefore
fundamentally breaking this and risking the kinds of problems we see in
each of the instances.
Bottomley would like to explore whether security vulnerabilities need to be
handled in secret at all. Given that he thinks that may not be
popular, looking into what can be done to inject more transparency into the
process would be a reasonable alternative.
Part of his theory is that "security people" who "love
secrecy" are running the vulnerability-handling process.
For example, the closed kernel security mailing list (security@kernel.org)
is either made up of "security officers" (according to
Documentation/SecurityBugs) or "'normal' kernel
developers" (according
to Greg Kroah-Hartman). There is no inherent interest in secrecy by
the participants on that list,
Kroah-Hartman said, though he did agree that posting a list of the members
of security@kernel.org—which has not yet happened—would help to make things
more transparent. The relationship
between the kernel security list and the linux-distros mailing list (a
closed list
for distribution security concerns—the successor to vendor-sec) is also a
bit murky, which could use some clearing up, Bottomley said.
A big part of the problem is that there are a few different constituencies to
try to satisfy, including
distributions (some of which, like enterprise distributions, may have
additional needs or wants), users (most of whom get their kernel from a
distributor or device maker), security researchers (who sometimes like to
make a big splash with their findings), and so on. While it might be tempting
to dismiss the security researchers as perpetrators of what Linus Torvalds
likes to call "the security circus", it is important to include them. They
are often the ones who find vulnerabilities; annoying them often results in
them failing to report what they find, sadly.
Secrecy in vulnerability handling may be important to the enterprise
distributions for other reasons, as Stephen Hemminger said.
Security vulnerabilities and response time are often used as a "sales" tool
in those markets, so that may lead to a push for more secrecy:
It seems to me that the secrecy is more about avoiding sensationalist
news reports that might provide FUD to competitors.
For the enterprise products this kind of FUD might impact buying
decisions and even the financial markets.
Torvalds's practice of hiding
the security implications of patches also plays a role here. He wants to
mask vulnerabilities so that "black hats" cannot easily grep
them from commit logs, but as James Morris pointed
out, that's not really effective: "The cryptic / silent fixes are
really only helping the bad guys. They are watching these commits and
doing security analysis on them."
It seems unlikely (though perhaps not completely impossible) that Torvalds would
change his mind on the issue, so various ideas on collecting known
security information correlated with the commit(s) that fixed them were
batted around. Clearly, some information about security implications only
comes to light after the
commit has been made—sometimes long after—so there is a need to collect it
separately in any case.
Kees Cook described
some of the information that could be collected, while Andy Lutomirski expanded
on the idea by suggesting separate CVE files stored in the kernel tree.
The idea
seemed fairly popular; others
chimed in with suggestions for collaborating with Debian and/or the
linux-distros mailing
list participants.
In a separate sub-thread, Lutomirski created
a template for how the information could be stored. Cook concurred
and suggested that the files could live under Documentation/CVEs
or something similar. It is clear that there is an interest in having more
data available on security vulnerabilities and fixes in the kernel, so
that could lead to a lively discussion in October.
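Purely as a hypothetical illustration (this sketch is mine, not Lutomirski's actual template), a per-CVE file of that sort might carry fields along these lines:

    CVE: CVE-2013-NNNN
    Summary: one-line description of the bug and its impact
    Introduced-by: commit that introduced the bug, if known
    Fixed-by: commit hash(es) of the fix
    Affected: mainline and stable series affected
    Mitigations: configuration options or workarounds, if any
    References: report and discussion URLs

Whatever the final format, the point is that impact information discovered after the fact could be recorded without rewriting commit history.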
Some seem to have already started down the path of more openness in the
security reporting realm.
Lutomirski recently publicly posted a fix that was
clearly marked as a security fix from the outset. Cook did much the same
with a list of vulnerabilities in the kernel's human
interface device (HID) code. Exploiting the HID bugs requires physical access and
specialized devices, but that may be part of the threat model for certain
users. These aren't the first reports of this kind;
others have been made from time to time. In fact, certain subsystems
(networking, in particular) essentially never use the closed list and
prefer to work on security problems and fixes in the open.
An even more recent example comes from Wannes Rombouts's report of a networking security hole (use
after free), which was
referred to the netdev mailing list by security@kernel.org.
The implications of the bug were not completely clear (either to Rombouts or to
Hemminger, who replied), but Ben Hutchings
recognized that user namespaces could make
the problem more widespread (when and if they are enabled in most kernels
anyway). Though it is networking related—thus the referral to netdev,
presumably—this is the kind of vulnerability that could have been handled behind
closed doors. But because it was posted to an open list, the full implications
of the problem were discovered. In addition, for this bug (as well as for
Lutomirski's and Cook's
bugs), those affected have the ability to find out about the problems and
either patch their kernels or otherwise mitigate the problem. And
that is another advantage of openness.
Comments (12 posted)
By Jonathan Corbet
September 11, 2013
Most of the hand-wringing over the UEFI secure boot mechanism has long
passed; those who want to run Linux on systems with secure boot enabled
are, for the most part, able to do so. Things are quiet enough that one
might be tempted to believe that the problem is entirely solved. As it
happens, though, the core patches that implement the lockdown that some
developers think is necessary for proper secure boot support still have not
made their way into the mainline. The developer behind that work is still
trying to get it merged, though; in the process, he has brought back an old
idea that was last rejected in 1998.
By Matthew Garrett's reading of the secure boot requirements, a system
running in secure boot mode must not allow any user to change the
running kernel; not even root is empowered to do so. Just over one year
ago, Matthew posted a set of patches that
implemented the necessary restrictions. In secure boot mode (as defined by
the absence of a new capability called, at that time,
CAP_SECURE_FIRMWARE), the kernel would not allow the loading of
unsigned kernel modules, direct access to I/O ports or I/O memory, or,
most controversially, use of the kexec_load() system call to
reboot directly into a new kernel. As one might expect, not everybody
liked this type of restriction, which flies in the face of the longstanding
Unix tradition of giving root enough rope to shoot itself in the foot.
So there were discussions around various aspects of these patches, but one of
the biggest problems only came to light later. It seems that there is a
fundamental flaw in the capability model: it is nearly impossible to add
new capability bits without risking problems with applications that do not
know about the new bits. In particular:
- Some capability-aware applications work by turning off every
capability that they do not think they need. If a new bit is added
controlling functionality that such an application uses, it will
unknowingly disable a necessary capability and cease to work properly.
From the point of view of users of this application, this kind of
change constitutes an incompatible ABI change.
- Other applications work in a blacklist-oriented mode, turning off
  only those capabilities that are known not to be needed. In essence,
  such an application leaves every capability bit set except the ones
  it explicitly clears. If some sort of security-related functionality
  is put behind a new bit that is unknown to this kind of application,
  that application will leave the capability enabled. That, in turn,
  could make the application insecure.
In this case, the biggest risk is that whitelist-style applications would
inadvertently turn off CAP_SECURE_FIRMWARE, essentially putting
themselves into secure boot mode even if the system as a whole is not
running in that mode. That could cause things to break in mysterious ways.
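To see how the whitelist failure mode arises in practice, consider this sketch using libcap (link with -lcap and run as root; the retained capabilities are arbitrary examples). Any capability added to the kernel after the program was written, CAP_SECURE_FIRMWARE included, is unknown to it and thus silently dropped:

    /* Whitelist pattern: clear everything, then re-enable only the
     * capabilities the program knows it needs. */
    #include <stdio.h>
    #include <sys/capability.h>

    int main(void)
    {
        cap_t caps = cap_init();    /* empty set: every bit cleared */
        cap_value_t keep[] = { CAP_NET_BIND_SERVICE, CAP_SETUID };

        /* Raise just the bits on our whitelist. */
        cap_set_flag(caps, CAP_PERMITTED, 2, keep, CAP_SET);
        cap_set_flag(caps, CAP_EFFECTIVE, 2, keep, CAP_SET);

        if (cap_set_proc(caps))     /* apply; all other bits dropped */
            perror("cap_set_proc");
        cap_free(caps);
        /* ... continue running with the reduced set ... */
        return 0;
    }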
What it comes down to is that, if one is designing a capability-based
system, one really must come up with the full list of needed capabilities
at the outset. Back in 1998, when capabilities for Linux were being hashed
out, nobody had UEFI secure boot in mind. So there is no relevant
capability bit available, and adding one now is not really an option.
More recently, Matthew posted a new patch
set that eliminates the new capability. Instead, all of the secure
boot restrictions were tied to the existing flag controlling whether
unsigned kernel
modules can be loaded. Matthew's reasoning was that the restriction on
module loading exists to prevent the loading of arbitrary code into the
running kernel, so it made sense to lock down any other functionality that
might make it possible to evade that restriction. Other developers
disagreed, though, saying that they needed the ability to restrict module
loading while still allowing other functionality — kexec_load() in
particular — to be used normally. After some discussion, Matthew backed
down and withdrew the patches.
Eventually he came back with what he called his
final attempt at providing a kernel lockdown facility that wasn't tied
to the secure boot mechanism itself. This time around, we have a new
sysfs file at /sys/kernel/security/securelevel that accepts any of
three values. If it is set to zero (the default), everything works as it
always has, with no new restrictions. Setting it to one invokes "secure
mode," in which all of the restrictions related to secure boot go into
effect. Secure mode is also irrevocable; once it has been enabled, it
cannot be disabled (short of compromising the kernel, at which point the
battle is already lost). There is also an interesting "permanently
insecure" mode obtained by setting securelevel to -1; the
system's behavior is the same as with a setting of zero, but it is no
longer possible to change the security level.
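From user space, engaging the lockdown would be a single write; here is a minimal sketch, assuming the interface lands as described in the patch set:

    /* Pin the system into secure mode.  Writing "1" is irrevocable;
     * "0" is the default, and "-1" selects the permanently insecure
     * mode (also irrevocable). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/kernel/security/securelevel", O_WRONLY);

        if (fd < 0) {
            perror("securelevel");
            return 1;
        }
        if (write(fd, "1", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }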
In the UEFI secure boot setting, the bootstrap code would take pains to set
securelevel to one before allowing any processes to run. That
helps to avoid race conditions where the system is subverted before
the lockdown can be applied.
Some readers will, by now, have recognized that "securelevel" looks an
awful lot like the
BSD functionality that goes by the same name; it was clearly patterned
after BSD's version. Amusingly, this is not the first time that
securelevel has been considered for Linux; there was an extensive discussion on the
subject in early 1998, when Alan Cox was pushing strongly for a
securelevel feature. At that time, Linus rejected the feature because he
had something much better in mind: capabilities. As is usually the case,
Linus won out, and Linux got capabilities instead of securelevel.
More than fifteen years later, it seems that we might just end up with both
mechanisms. Thus far, Matthew's latest patch set has not resulted in many
screams of agony, so it might just pass review this time — though, at this
point, it is almost certainly too late for 3.12. Meanwhile, Vivek Goyal
has posted the first version of a signed kexec
patch set that would limit kexec_load() to signed images.
That would allow some useful features (kdump, for example) to continue to
work properly in the secure boot environment without leaving
kexec_load() completely open. That, too, will make the secure
boot restrictions a bit more palatable and increase their chances of being
merged.
Comments (35 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
- Marco Stornelli: pramfs (September 9, 2013)
Memory management
Networking
Architecture-specific
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Nathan Willis
September 11, 2013
It is pretty easy to get a new project up and running with Git, but
integrating Git—or any other new version control
system—can be painful for an existing project with an
established code base. Such is the case for Debian, with its tens of
thousands of packages spread across multiple versions of the distribution. Migrating
Debian to a Git-based version-control system would be a herculean
ordeal, if that was even a task that the project was interested in
undertaking. But Ian Jackson recently unveiled a new tool that serves
as a bridge between the official Debian archives and a Git repository,
thus allowing developers to use a Git workflow while remaining fully
integrated with the archive.
The tool is called dgit; Jackson announced version 0.7, the first
"suitable for alpha and beta testers", on August 22. The
concept behind it was hashed out during DebConf13 in mid-August. As
Jackson explained it, the goal was to allow package maintainers and
developers to use "a gitish workflow" if they so desired,
including working with upstream Git repositories and preserving Git
histories, but without forcing a Git-based workflow on anyone who was
happier using the status quo.
The bird's-eye view of dgit is that it treats the Debian archive
(which contains all of the packages that make up a Debian release) as
if it were a remote Git repository. A developer can clone or fetch a
package from the archive, commit and merge changes, and push updates
to the package, all using dgit commands that mirror, in most
ways, the offerings of Git itself. But this functionality is on
demand; if no developer has dgit-cloned a particular package,
no Git view of it is created—creating one automatically for
every Debian package would consume far too many resources.
Thus, there is quite a bit of work going on behind the scenes to
keep the archive and the dgit view of the package in sync. When a
developer uses:
dgit clone foopackage sid
for example,
dgit initializes a Git repository on Debian's Alioth server, pulls in
the contents of foopackage from the sid
distribution, then constructs the local repository on the developer's
machine. The developer can then use normal Git tools (raw
command-line or otherwise) as desired. When it is time to upload
changes, dgit sbuild constructs the source package.
Then, a dgit push both pushes the current HEAD to
the remote Git repository on Alioth and uploads the source package to
the Debian archive.
Where things get more difficult are those situations when a package
is modified outside of changes made directly on the dgit local branch,
such as with a set of patches. The tool includes a
dgit quilt-fixup command to integrate with the
quilt patch manager
(which lets maintainers keep track of a set of patches that need to be
applied before each upload). The quilt-fixup command creates a
"synthetic commit" which is then added to the Git history
before the package is pushed. However, as Jackson noted in the man
page and on the debian-devel mailing list, this is an imperfect
solution.
Jackson pointed out some peculiarities of quilt that make it
incompatible (at least for the time being) with dgit. For example,
when one uses dpkg-source
to build a source package in Debian's quilt-compatible format, if the result is then
extracted (again using dpkg-source), the contents are not identical to
the original—specifically, there are extra metadata files
generated. This makes it difficult to use quilt to apply a set of
patches and push the results with dgit, so Jackson recommended
steering clear of quilt-formatted source packages altogether.
On the mailing list, Raphael Hertzog took some umbrage at Jackson's
description of this issue as "brain damage" on quilt's part. In the
ensuing discussion, Hertzog and Jackson eventually reached an
impasse. The disagreement boils down to what is considered the
"normal" workflow—specifically, how a developer should manage
both local changes and a set of quilt-managed patches. Hertzog
contends that developers should record their own local changes as a
separate patch in quilt, while Jackson believes local changes should
be orthogonal to those patches managed in quilt. But when using
Jackson's workflow, quilt copes with the local changes by adding
additional metadata, in the form of the extra files seen by
dpkg-source.
In any case, Jackson
eventually decided to simply work around the oddities that result from
trying to use quilt and dgit together. It is certainly possible for a
developer to use dgit without worrying about the issue, merely by not
bringing quilt into the mix. Of course, asking a developer to start
using a different workflow is rarely a welcome suggestion, but there
is hope that the distinctions will eventually be smoothed over.
There are some other limitations, however. For now, dgit is only
usable by official Debian Developers (DDs); non-DDs cannot even
create a read-only view of a dgit repository. This is due to the
access control setup deployed on the Debian servers; it may be
resolved in the future when Jackson and the system administrators have
sufficient time.
Hertzog also inquired whether there
might be any lessons to learn from Ubuntu's Distributed
Development (UDD) project, which automatically imported all packages in
the Ubuntu and Debian archives into repositories for use with Bazaar.
"Automatic" import is in many ways wishful thinking; as several
reported, Ubuntu found that there are a variety of special cases that
dictate manual intervention to repair an imported package, and it can
be problematic to get the full commit history of each
package—which can involve upstream changes, patches, and commits
made by individual developers. Ubuntu had it easier than Debian
because UDD was limited to a single, Bazaar-based workflow. Since
Debian is (at least for the foreseeable future) committed to giving
its developers and maintainers the freedom to use any workflow they
wish, deploying something like dgit for the entire Debian package
archive would probably require more people-power than the project has.
No doubt many interesting things could be done with the
availability of a Git repository containing the entire Debian archive,
and accessible to the world. Dgit is not likely to reach that stage
any time soon, but, as Jackson pointed out, he wanted something that he
could deploy and use immediately. And it is clearly good news that
Debian developers can begin using dgit now; Git has proven itself to
be the version-control system of choice in free software at large, so
integrating it with one of the premier free software distributions is
sure to reap benefits for developers and Debian users alike.
Comments (4 posted)
Brief items
Those of us who experienced the heady days of Nokia's forays into Linux will continue to think on the what-might've-been's had Nokia taken a risk and pulled the trigger on something new and innovative. We had a short glimpse into that future when Nokia announced the N9 and MeeGo as their new flagship platform, but the Elopocalypse brought an end to that future. Perhaps the appliances that represent today's mobile device options would include a more open, accessible, and interesting mobile general computing platform had things gone differently.
—
MWKN weekly news
Comments (none posted)
The Linux From Scratch community has announced the release of LFS 7.4.
"
It is a major release with toolchain updates to binutils-2.23.2, glibc-2.18, and gcc-4.8.1. In total, 32 packages (of 62) were updated from LFS-7.3 and changes to bootscripts and text have been made throughout the book."
Full Story (comments: none)
Distribution News
Debian GNU/Linux
Debian Project Leader Lucas Nussbaum presents an update on DPL activities
during August and early September. Topics include DebConf, survey of new
contributors, participation in the OpenZFS initiative, status of MariaDB for
Debian, Outreach Program for Women, and more.
Full Story (comments: none)
There will be a Bug-Squashing-Party in Munich, Germany November 22-24.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
LinuxInsider
reviews Cr OS, an
openSUSE derivative featuring the Cinnamon desktop and the Chromium
browser. "
Cr OS is a fully functional Linux distro. It has its own repository and package manager to provide software updates.
I was generally pleased with Cr OS. Its lightweight design does not have
many of the advanced features that tend to bog down Linux Mint, but the
Cinnamon desktop definitely provides a Minty look and feel. The only serious impediment is its high rate of incompatibility with wireless hardware."
Comments (4 posted)
Page editor: Rebecca Sobol
Development
By Nathan Willis
September 11, 2013
For many years, FontForge has been the only option
for editing or designing fonts with free software. But FontForge can
be an intimidating tool with a difficult learning curve, and over the
years it had begun to get long in the tooth, with little regular attention
from active developers. The development situation has improved
significantly of late, but those who need a quick and easy-to-learn
open source font editor now have another option in Birdfont.
It's a draw
Birdfont is the work of Johan Mattsson, who announced the project
in December 2012 on the CREATE mailing list, and has subsequently made
a steady series of small releases. The latest is version 0.28, from
August 2013. Birdfont is written in Vala, a language
closely associated with GNOME, but builds are provided not only for
Linux but also for Mac OS X, Windows, and
OpenBSD. Birdfont's feature set is limited in scope (at least when
compared to FontForge or proprietary editors); it initially supported
creating only TrueType fonts, although support for Embedded OpenType
(EOT) and SVG fonts has since followed. Similarly, the initial
emphasis was placed on providing a good drawing canvas, while support
for features like kerning has developed slowly.
Nevertheless, the emphasis on a good canvas for editing glyphs does
pay off: Birdfont's glyph editor is considerably easier to get started
with than FontForge's—particularly for people who come from
Inkscape, Illustrator, or other non-font-specific editing tools. The
drawing area and the tools look modern, are rendered smoothly, and
behave more like the components of a standard drawing application.
To
give a few examples, Birdfont's coordinate grid is a set of light-colored
background lines, vector curves are always rendered as black-on-white
lines, and those curves have easily distinguished
control points that one can grab with the mouse. In contrast, FontForge
does not show a coordinate grid at all (just numeric coordinates in
the toolbar), background guide lines and vectors are the same color,
and control points are rendered in several different shapes and colors
depending on what variety of curve and point they correspond to. The
upshot is that it is easy to pick up Birdfont and just start
"sketching" ideas. FontForge is pickier about how its drawing tools
operate, which lends itself to a different workflow.
Of course, there are reasons—even, one would have to agree,
good reasons—for many of the choices FontForge makes in these
areas. But they do make the application harder to use for
newcomers. Birdfont may yet have to implement user interface changes
if it continues to add features, though, at which point maintaining
simplicity will become a greater challenge. For example, it currently
supports only one drawing layer, which looks very uncluttered. With
multiple layers, the UI must distinguish which layer is active, such
as by adding a layers panel or rendering inactive layers in a
different color—either way, adding more clutter to the interface.
But there are also plenty of niceties that even the most die-hard
FontForge fan would appreciate. For instance, font formats disagree
about whether the outermost contour of a glyph should be oriented
clockwise or counterclockwise (TrueType says clockwise, PostScript and
OpenType CFF say counterclockwise). Of course, a glyph's contour is a
closed curve, but the orientation tells the renderer in which order
the points on that curve will be listed, which is helpful for less
powerful renderers. FontForge makes the orientation of
every contour an explicit property, and if it is incorrect with
respect to the standard of the exported format, the glyph is rendered
incorrectly; Birdfont does not make contour orientation a property at
all, and simply fills in the shape regardless of how it was drawn.
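For the curious, a contour's orientation can be computed from its points
alone. Here is a minimal sketch in Python, assuming the outline has been
flattened to a list of on-curve points (the function name is
hypothetical), using the classic shoelace formula:

    def contour_orientation(points):
        # Signed area via the shoelace formula; in a y-up coordinate
        # system (as font units are), a positive area means the points
        # run counterclockwise, a negative area means clockwise.
        area = 0.0
        for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
            area += x0 * y1 - x1 * y0
        return "counterclockwise" if area > 0 else "clockwise"

    # A square drawn counterclockwise:
    print(contour_orientation([(0, 0), (100, 0), (100, 100), (0, 100)]))

An editor that tracks orientation can flip a backward contour simply by
reversing its point list before export.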
On the other hand, there are a number of usability quirks in
Birdfont's editor that may be difficult to get used to. There are
multiple unlabeled tool buttons with ambiguous icons (so that one must
hover the mouse over them to see the help text appear). The contrast
on the toolbar buttons is low, which can make it hard to tell which one
is selected. Some of the buttons also revert to their previous state
after one use, while others persist, which can be tricky to keep
track of, especially when the low contrast makes their state harder
to see.
In addition, some of the tools
do not behave in entirely obvious ways. For instance, when the
"pencil" tool is selected, one can left-click to move or manipulate
existing points on a drawing—but must right-click to add new
points on a new curve, or double-left-click to add new points to an
existing curve. Moving existing points seems more like it
should be the job of the "move" tool, but the "move" tool is used to
move, rotate, or resize whole paths. Separating the point tools from
the path tools is a good way to prevent editing
accidents like deleting a shape when you meant to delete a single
point; Inkscape does the same thing, while FontForge does not
offer such protection. In addition, the Birdfont
"move" tool's on-canvas rotate-and-resize functionality is about as
nice as they come, but, nevertheless, this behavior can be quite
difficult to discover in the current interface.
At this point, there are also several
features that appear in the UI but are not yet implemented,
such as the "circle" and "rectangle" tools. There are also a few
peculiar controls that look like tool buttons but are not. They are
in fact numeric sliders that control the grid spacing, stroke width, and
background image scaling. To change any of the settings, one clicks
the "button," holds it down, and moves the mouse cursor up or
down. Being able to change these settings with the mouse is nice, but
the tool button widget is being overloaded to do so.
Several of these quirks could be smoothed over with different
widgets or tool icons, but documentation would go much further. One
can only hope that it is on its way; there are a lot of details that
need description (such as what the "Export" function actually
generates, and how), and getting up to speed with the drawing tools would be
much easier with examples. Mattsson has created a few tutorial
videos, which are a start, but leave a lot of uncovered ground where
the newer features are concerned.
Details
Drawing characters is arguably the "fun part" of font editing, but
eventually one must tackle the less glamorous tasks of letterspacing,
kerning, hinting, and adding OpenType features (such as ligatures).
Basic letterspacing can be done on-canvas: two vertical guides (one
for the left "bearing" and one for the right) are available in the glyph view;
one simply grabs them and drags them left or right to change the
amount of space on either side. This is quite painless, but it is
unfortunate that the exact coordinate values are not shown anywhere in
the interface, and setting the left and right bearings to specific
values is not supported.
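For readers unfamiliar with the terminology, the bearings relate to a
glyph's advance width in a simple way; here is a quick sketch of the
usual arithmetic, with invented values:

    # A glyph's advance width is the left side bearing, plus the width
    # of the outline's bounding box, plus the right side bearing.
    def side_bearings(advance_width, x_min, x_max):
        left = x_min                    # empty space before the outline
        right = advance_width - x_max   # empty space after the outline
        return left, right

    # A glyph 500 units wide whose outline spans x = 50..450:
    print(side_bearings(500, 50, 450))  # -> (50, 50)

Dragging Birdfont's guides adjusts these two quantities; the complaint
above is that the numbers themselves are never shown.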
Birdfont has only recently gained support for working on kerning,
and the situation is similar to that of letterspacing. One can open
up a kerning tab from the menu at the top, type in a few
characters, then move a slider back and forth to set a kerning
adjustment between any pair of adjacent letters. There is a school of
thought which says that the exact numeric values for things like
letterspacing and kerning are not important—after all, the sole
point is to make adjustments until the result looks good to the
decidedly un-scientific human eye. But in practice, not showing such
values in the interface can make working on the font more difficult.
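To make the role of those numeric values concrete, here is a minimal
sketch of how per-pair kerning adjustments affect layout; the glyph
widths and kern values below are invented for illustration:

    # The pen position advances by each glyph's advance width, plus any
    # kern adjustment defined for the pair it forms with its predecessor.
    advance = {"A": 600, "V": 620, "T": 580}
    kern = {("T", "A"): -60, ("A", "V"): -80}  # negative pulls pairs closer

    def pen_positions(text):
        x, positions = 0, []
        for prev, cur in zip(" " + text, text):
            x += kern.get((prev, cur), 0)
            positions.append((cur, x))
            x += advance[cur]
        return positions

    print(pen_positions("TAV"))  # -> [('T', 0), ('A', 520), ('V', 1040)]

Without the kern entries, "A" would sit at 580 and "V" at 1180; the
negative adjustments tuck the letters into each other's whitespace.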
Of course, the "correct" way to do spacing adjustments is to
generate a .TTF or .OTF font and put together test pages for print or
for the web. On that task, too, the current version of Birdfont does not
expose many details or options. Perhaps FontForge exposes too many
export options (especially if one includes its detailed format-validation
settings), but Birdfont is missing some key ones, like control over
glyph names and whole-font settings. Choosing "Export" from the file
menu lets the user select an output directory, but not specify what
sort of output is created. As of 0.28, the export includes a .TTF font,
an .EOT font, an .SVG font, and an HTML sample file; all four are
generated with each export. However, I found the export to
be buggy, generating empty .TTFs several times for reasons I have not
yet been able to pin down.
Birdfont will save font projects in its own .bf file format, which
is XML-based. I was not able to find a description of the format, but
it looks more like SVG than it does the XML-based Unified Font Object
(UFO) format. In an email, Mattsson said he plans to eventually move
the application over to UFO, so that he does not have to maintain a
file format in addition to an application.
In another format-related wrinkle, creating kerning pairs from scratch
is currently the only supported method; Birdfont will not recognize
kerning tables that already exist in the file. There is a short roadmap on
the project's wiki that indicates support is coming for more font
features, including reading existing kerning tables and ligature
support.
Last word
The complaints about missing functionality and interface quirks should not be taken
as criticism of the project. This is code which is still in
development; it is quite natural for there to be many tasks as yet
uncompleted. However, anyone who finds FontForge daunting and is
interested in running Birdfont as an editor would be wise to take
some precautions with respect to the missing features.
First, the undo command only undoes certain operations and is not
accompanied by a redo command. Second, there is a "Preview" command
available in the File menu; it seems to generate and overwrite a .TTF
in the last-used directory, even though the .TTF is used only to show
how the font looks in a preview window. Thus, if you have recently exported your
work as a .TTF, Preview will overwrite it. Finally, while the drawing
tools do not offer point-precise movements or transformations, you can
at least have Birdfont display the exact grid coordinates of the mouse
cursor by launching the application with
birdfont --show-coordinates.
Birdfont has made remarkable strides in its short life thus far.
It is true that FontForge supports almost every font
format and option under the sun (including, for example, bitmapped
fonts and Adobe's Multiple Master format), but for many everyday uses
that is clearly overkill. For a lot of casual users, the most they
will ever want to do with a font is to open up an existing .TTF file and
make a handful of small tweaks: changing an awkward character, adding a
currency symbol, adding a slash to the zero, and so on. For those users,
Birdfont is excellent news indeed. It is easy to get started with,
the drawing tools look modern and nice, and there is very little
chance of generating output with the wrong settings.
Perhaps Birdfont will add enough features to compete with FontForge
for high-end users, too, but in the meantime it is refreshing to see
another take on the free software font editor, and it is nice to have
an option that targets simplicity and ease of use.
Comments (none posted)
Brief items
We could literally redefine the speed of light in CSS; it is 1,133,073,857,007.87 CSS pixels per second – relativity in CSS makes light travel a bit slower on devices with smaller form factors than traditional PCs, from our perspective, looking into the screen from the real world.
—
Tim Chien
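The quoted figure is straightforward to verify: CSS defines the
reference pixel as 1/96 of an inch, so the conversion is plain unit
arithmetic:

    c = 299_792_458              # speed of light, meters per second
    px_per_meter = 96 / 0.0254   # 96 CSS px per inch, 0.0254 m per inch
    print(c * px_per_meter)      # about 1,133,073,857,007.87 CSS px/s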
Comments (none posted)
Version 9.3.0 of the PostgreSQL database manager is out. "
This
release expands PostgreSQL's reliability, availability, and ability to
integrate with other databases. Users are already finding that they can
build applications using version 9.3 which would not have been possible
before." See
this article for a
detailed summary of what's new in this release.
Full Story (comments: none)
Version 3.4.0 of the digiKam photo-management application has been released. No new user-visible functionality is incorporated in this version; however, 3.4.0 does mark the debut of a rewritten image-management core that takes full advantage of multiple CPUs and cores, allowing everyone to process their vacation photos far more rapidly, among other uses.
Full Story (comments: none)
Version 5.3 of the LibreJS tool has been released. LibreJS is an extension for Mozilla-based browsers, and is designed to block non-trivial JavaScript that is not demonstrably free software. 5.3 is actually an update that corrects compatibility problems in the just-released 5.2. New features in 5.2 and 5.3 include per-script controls for blocking and allowing content, improved matching of HTTPS URLs, and support for "magnet" links in addition to canonical URLs.
Full Story (comments: none)
Version 2.2.0 of the Slony master-to-multiple-slave replication engine for PostgreSQL has been released. Two major new features are included. Slony now uses the COPY command for replication, which "will result in faster replication times and more efficient resource utilization on the replica for many workloads," and the FAILOVER command "has been made more reliable for complex cluster configurations. The FAILOVER command now supports the failure of multiple nodes at the same time. All users of the FAILOVER should read the documentation to familiarize themselves with these changes."
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
Libre Graphics World takes a look at the just-announced Patchfield audio server for Android. "In a nutshell, Patchfield reuses the cornerstone idea of JACK — creating standalone applications for audio generation and/or processing and connecting them in a patchbay." The fact that Patchfield brings advanced audio-processing capabilities to Android sounds like a plus in many ways, but there are evidently detractors who do not regard JACK's design as something to be closely emulated.
Comments (8 posted)
Page editor: Nathan Willis
Announcements
Brief items
The Free Software Foundation is encouraging users to avoid all Apple
products, in the interest of their own freedom and the freedom of those
around them. "
We urge users to investigate ways to support the use
of mobile devices which do not restrict users' essential freedoms. Such
projects include Replicant, a free
software fork of Android, and F-Droid, an
app repository of exclusively free software for Android. People should
also let Tim Cook at Apple know how they feel."
Full Story (comments: none)
The Free Software Foundation's Defective By Design campaign has compiled a
FAQ to address the most
common misconceptions regarding DRM.
Full Story (comments: none)
Articles of interest
BizBash
talks with Linux Foundation conference organization maven Angela Brown about how she puts together events like
LinuxCon. "
At larger conferences, the foundation will often host a number of smaller related events for different communities within the schedule of the overall event. A smaller event might be a one-day seminar focused on a specific platform, or a hackathon related to a project. "It allows people to rally around a common goal or project they're working on," Brown says. "All of a sudden this 1,500-person conference becomes 100 people collaborating." The foundation might incorporate a lunch for women—often an underrepresented group in tech circles—to meet and network.
Likewise, for a party during LinuxCon this month, Brown chose the House of Blues, partly because it has a number of smaller rooms to break up the party into smaller groups of attendees, who will walk to the venue in their own Mardi Gras-style parade."
Comments (none posted)
Calls for Presentations
PyCon.DE will take place October
14-19 in Cologne, Germany. The event organizers are looking for people
working on projects related to Python to contribute and to promote their projects.
Full Story (comments: none)
CFP Deadlines: September 12, 2013 to November 11, 2013
The following listing of CFP deadlines is taken from the
LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
| September 15 | November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| September 15 | November 15–16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| September 15 | October 3–4 | PyConZA 2013 | Cape Town, South Africa |
| September 15 | November 22–24 | Python Conference Spain 2013 | Madrid, Spain |
| September 15 | April 9–17 | PyCon 2014 | Montreal, Canada |
| September 15 | February 1–2 | FOSDEM 2014 | Brussels, Belgium |
| October 1 | November 28 | Puppet Camp | Munich, Germany |
| October 4 | November 15–17 | openSUSE Summit 2013 | Lake Buena Vista, FL, USA |
| November 1 | January 6 | Sysadmin Miniconf at Linux.conf.au 2014 | Perth, Australia |
If the CFP deadline for your event does not appear here, please
tell us about it.
Upcoming Events
The
LibreOffice
Conference will take place September 25-27 in Milan, Italy. "
Tracks will cover the Open Document Format (ODF); LibreOffice Development; Community Development; Best Practices for Deployments and Migrations; and Building a Business with LibreOffice. For the first time during a conference, there will be a chance of sitting together with LibreOffice developers to hack the code, or just discuss the next feature."
Full Story (comments: none)
The GNU Project will be celebrating its 30th anniversary September 27-29,
in Cambridge, Massachusetts. "
GNU supporters all over the world are
planning their own celebrations, and we've listed them all on the 30th anniversary page. Events
are currently in the works in Buenos Aires, Argentina; Cox's Bazaar,
Bangladesh; Kitchener, Canada; Prague, Czech Republic; Paris, France;
and Tokyo, Japan."
Full Story (comments: none)
linux.conf.au 2014 will take place January 6-10, 2014 in Perth, Western
Australia. An early list of confirmed speakers has been announced.
Full Story (comments: none)
Events: September 12, 2013 to November 11, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| September 12–14 | SmartDevCon | Katowice, Poland |
| September 13 | CentOS Dojo and Community Day | London, UK |
| September 16–18 | CloudOpen | New Orleans, LA, USA |
| September 16–18 | LinuxCon North America | New Orleans, LA, USA |
| September 18–20 | Linux Plumbers Conference | New Orleans, LA, USA |
| September 19–20 | UEFI Plugfest | New Orleans, LA, USA |
| September 19–20 | Open Source Software for Business | Prato, Italy |
| September 19–20 | Linux Security Summit | New Orleans, LA, USA |
| September 20–22 | PyCon UK 2013 | Coventry, UK |
| September 23–25 | X Developer's Conference | Portland, OR, USA |
| September 23–27 | Tcl/Tk Conference | New Orleans, LA, USA |
| September 24–25 | Kernel Recipes 2013 | Paris, France |
| September 24–26 | OpenNebula Conf | Berlin, Germany |
| September 25–27 | LibreOffice Conference 2013 | Milan, Italy |
| September 26–29 | EuroBSDcon | St Julian's area, Malta |
| September 27–29 | GNU 30th anniversary | Cambridge, MA, USA |
| September 30 | CentOS Dojo and Community Day | New Orleans, LA, USA |
| October 3–4 | PyConZA 2013 | Cape Town, South Africa |
| October 4–5 | Open Source Developers Conference France | Paris, France |
| October 7–9 | Qt Developer Days | Berlin, Germany |
| October 12–13 | PyCon Ireland | Dublin, Ireland |
| October 14–19 | PyCon.DE 2013 | Cologne, Germany |
| October 17–20 | PyCon PL | Szczyrk, Poland |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
| October 21–23 | Open Source Developers Conference | Auckland, New Zealand |
| October 21–23 | KVM Forum | Edinburgh, UK |
| October 21–23 | LinuxCon Europe 2013 | Edinburgh, UK |
| October 22–23 | GStreamer Conference | Edinburgh, UK |
| October 22–24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| October 23 | TracingSummit2013 | Edinburgh, UK |
| October 23–24 | Open Source Monitoring Conference | Nuremberg, Germany |
| October 23–25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| October 24–25 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 24–25 | Xen Project Developer Summit | Edinburgh, UK |
| October 24–25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| October 25–27 | vBSDcon 2013 | Herndon, Virginia, USA |
| October 25–27 | Blender Conference 2013 | Amsterdam, Netherlands |
| October 26–27 | PostgreSQL Conference China 2013 | Hangzhou, China |
| October 26–27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| October 28–31 | 15th Real Time Linux Workshop | Lugano, Switzerland |
| October 28–November 1 | Linaro Connect USA 2013 | Santa Clara, CA, USA |
| October 29–November 1 | PostgreSQL Conference Europe 2013 | Dublin, Ireland |
| November 3–8 | 27th Large Installation System Administration Conference | Washington DC, USA |
| November 5–8 | OpenStack Summit | Hong Kong, Hong Kong |
| November 6–7 | 2013 LLVM Developers' Meeting | San Francisco, CA, USA |
| November 8 | CentOS Dojo and Community Day | Madrid, Spain |
| November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| November 8–10 | FSCONS 2013 | Göteborg, Sweden |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol