The recent problem with
prelink in Fedora Rawhide has led some to
wonder what advantages pre-linking actually brings—and whether those
advantages outweigh the pain it can cause. Pre-linking can
reduce application start up time—and save some memory as
well—but there are some downsides, not least the possibility of an
unbootable system, as some Rawhide users encountered.
The advantages are small enough, or hard enough to quantify completely,
that questions arise about whether pre-linking is justified as the
default for Fedora.
Linux programs typically consist of a binary executable file that refers to
multiple shared libraries. These libraries are loaded into memory once and
shared by multiple executables. In order to make
that happen, the dynamic linker (i.e. ld.so) needs to change the
binary in memory such that any addresses of library objects point to the
right place in memory. For applications with many shared
libraries—GUI programs for example—that process can take some time.
The idea behind pre-linking is fairly simple: reduce the amount of time the
dynamic linker needs to spend doing these address relocations by doing it in
advance and storing the results. The prelink program processes
ELF binaries and shared libraries in much the same way that ld.so
would, and then adds special ELF sections to the files describing the
relocations. When ld.so loads a pre-linked binary or library, it
checks these sections and, if the libraries are loaded at the expected
location and the library hasn't changed, it can do its job much more quickly.
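
For the curious, the effect is visible from user space. The short Python
sketch below (a rough illustration, not a diagnostic tool) uses readelf to
count a binary's shared-library dependencies and dynamic relocation entries,
and to look for the .gnu.prelink_undo section that prelink is generally
documented to add to files it has processed; the section name and the crude
output parsing should be treated as assumptions.

    #!/usr/bin/env python3
    # Sketch: count a binary's shared-library dependencies and dynamic
    # relocation entries, and check for the ".gnu.prelink_undo" section
    # that prelink is documented to add to files it has processed.
    # Assumes the readelf tool (binutils) is installed; the section name
    # and the crude output parsing are assumptions, not guarantees.
    import subprocess
    import sys

    def readelf(flag, path):
        out = subprocess.run(["readelf", flag, path],
                             capture_output=True, text=True)
        return out.stdout

    def inspect(path):
        needed = [l for l in readelf("-d", path).splitlines()
                  if "(NEEDED)" in l]
        relocs = [l for l in readelf("-rW", path).splitlines()
                  if l[:1] == "0"]       # crude: entry lines start with an offset
        prelinked = ".gnu.prelink_undo" in readelf("-S", path)
        print(f"{path}: {len(needed)} shared libraries, "
              f"~{len(relocs)} relocation entries, prelinked: {prelinked}")

    if __name__ == "__main__":
        for binary in sys.argv[1:] or ["/bin/cat"]:
            inspect(binary)

Running it against a large GUI binary and something minimal like /bin/cat
gives a feel for why the former pays a much larger relocation bill at startup.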
But there are a few problems with the pre-linking approach. For one thing, it makes
the location of shared libraries very predictable. One of the ideas behind
address space layout
randomization (ASLR) is to randomize these locations each time a
program is run—or library loaded—so that malicious
programs cannot easily and reproducibly predict addresses. On Fedora and
Red Hat Enterprise Linux (RHEL) systems, prelink is run every two
weeks with a parameter to request random addresses to alleviate this
problem, but they do stay fixed over that time period.
In addition, whenever applications or libraries are upgraded,
prelink must be run again. The linker is smart enough to
recognize the situation and revert to its normal linking process when
something has changed, but the
advantage that prelink brings is lost until the pre-linking is redone.
Furthermore, the kernel randomly locates the VDSO (virtual dynamically-linked shared object)
"library", which, on 32-bit systems, can overlap one of the
libraries, requiring some address relocation anyway. Overall, pre-linking
is a bit of a hack, and it is far from clear that its benefits are
substantial enough to overcome that.
Fedora and RHEL enable pre-linking
by default, while most other distributions make prelink available,
but seem unconvinced that the benefits are substantial enough to make it
the default. Because it is a very system-dependent feature, hard
performance numbers are difficult to find. It certainly helps in some
cases, but is it really something that everyone needs?
Matthew Miller brought that question up on
the fedora-devel mailing list:
I see [prelink] as adding unnecessary complexity and fragility, and it makes
forensic verification difficult. Binaries can't be verified without being
modified, which is far from ideal. And the error about dependencies having
changed since prelinking is disturbingly frequent.
On the other hand, smart people have worked on it. It's very likely that
those smart people know things I don't. I can't find any good numbers
anywhere demonstrating the concrete benefits provided by prelink. Is there
data out there? [...]
Even assuming a benefit, the price may not be worth it. SELinux gives a
definite performance hit, but it's widely accepted as being part of the
price to pay for added security. Enabling prelink seems to fall on the other
side of the line. What's the justification?
Glibc maintainer Ulrich Drepper noted that
pre-linking avoids most or all of the cost of relocations, while also
pointing out that the relatively new symbol table hashing feature in GCC
has reduced the gain for pre-linking. He also described an additional benefit: memory
pages that do not require changes for relocations will not be copied (due
to copy-on-write) and can thus be shared between multiple processes running
the same executable. But his primary motivation may have more to do with
his work flow: "Note, also small but frequently used apps benefit. I
run gcc etc a lot
and like every single saved cycle."
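
Drepper's memory argument can be examined, at least roughly, from user space
as well: pages that the dynamic linker writes during relocation show up as
private, dirty pages in /proc/&lt;pid&gt;/smaps, while untouched pages of a library
mapping can remain shared. The following Python sketch sums those counters for
one library in one process; it only shows where to look, and it assumes the
usual smaps field names rather than any particular kernel version.

    #!/usr/bin/env python3
    # Sketch: sum the Shared_Clean and Private_Dirty counters for one
    # library's mappings in one process, using /proc/<pid>/smaps.  Pages
    # written during load-time relocation become private and dirty, so
    # they can no longer be shared between processes.  This only shows
    # where to look; it is not a benchmark, and it assumes the usual
    # smaps field names.
    import sys

    def library_pages(pid, libname):
        totals = {"Shared_Clean": 0, "Private_Dirty": 0}
        in_lib = False
        with open(f"/proc/{pid}/smaps") as smaps:
            for line in smaps:
                fields = line.split()
                if fields and "-" in fields[0] and ":" not in fields[0]:
                    # Mapping header: "start-end perms offset dev inode [path]"
                    in_lib = len(fields) >= 6 and libname in fields[-1]
                elif in_lib and fields and fields[0].rstrip(":") in totals:
                    totals[fields[0].rstrip(":")] += int(fields[1])  # kB
        return totals

    if __name__ == "__main__":
        pid = sys.argv[1] if len(sys.argv) > 1 else "self"
        lib = sys.argv[2] if len(sys.argv) > 2 else "libc"
        print(f"{lib} mappings in pid {pid}: {library_pages(pid, lib)}")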
The effect of pre-linking can be measured by using the LD_DEBUG
environment variable as Drepper described. Jakub Jelinek, who is the
author of prelink, posted some
results for OpenOffice.org Writer showing an order of magnitude
difference in the amount of time spent doing relocations between
pre-linked and regular binaries.
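
Repeating that kind of measurement is straightforward: setting
LD_DEBUG=statistics makes the glibc dynamic linker print its own startup
accounting, including the number of relocations processed and the time spent
on them. A minimal Python wrapper might look like the following; the exact
wording of the statistics lines varies between glibc versions, so the
substring filter is only a guess.

    #!/usr/bin/env python3
    # Minimal sketch: run a command with LD_DEBUG=statistics so that the
    # glibc dynamic linker prints its startup accounting, then show only
    # the lines mentioning relocations.  ld.so writes this output to
    # stderr; the exact wording of the lines varies between glibc
    # versions, so the substring filter below is only a guess.
    import os
    import subprocess
    import sys

    def relocation_stats(cmd):
        env = dict(os.environ, LD_DEBUG="statistics")
        result = subprocess.run(cmd, env=env, capture_output=True, text=True)
        return [line for line in result.stderr.splitlines() if "reloc" in line]

    if __name__ == "__main__":
        command = sys.argv[1:] or ["/bin/true"]
        for line in relocation_stats(command):
            print(line)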
Jelinek's results are impressive, but, at least for long-running programs,
start up time doesn't really dominate—desktop
applications, or often-used utilities, are the likely beneficiaries. As
Miller puts it:
If I can get a 50% speed up to a program's
startup times, that sounds great, but if I then leave that program running
for days on end, I haven't actually won very much at all -- but I still pay
the price continuously. (That price being: fragility, verifiability, and of
course the prelinking activity itself.)
For 32-bit processors, though, which are those most likely to benefit from
the memory savings, there is still the VDSO overlap problem. John Reiser
did an experiment using cat and found that
glibc needed to be dynamically relocated fairly frequently:
This means that glibc
must be dynamically relocated about 10% of the time anyway,
even though glibc has been pre-linked, and even though /bin/cat is
near minimal in its use of shared libraries. When a GNOME app uses
50 or more pre-linked shared libs, as claimed in another thread on
this subject, then runtime conflict and expense are even more likely.
There doesn't seem to be much interest in removing the prelink
default for Fedora, but one has to wonder, if the savings are as large
and widespread as people seem to think, why other distributions have been
reluctant to adopt it. Part of the reason may be the possibility of a
prelink bug rendering systems unbootable or reluctance to rely
upon something that requires modifying binaries and libraries, regularly,
to keep everything in sync. The security issues may also play into their
thinking, though Jelinek argues that
security-sensitive programs should be position-independent executables
(PIE) that are not pre-linked, and thus have ASLR done for every execution.
While not impossible, a problem like the one Rawhide suffered seems unlikely to
occur in more polished, non-development releases. Though prelink
does provide a benefit, it may be a bit hard to justify as time goes on.
For some, who are extremely sensitive to start up time costs, it may make a
great deal of sense, but it may well be that for the majority of users, the
annoyance and dangers are just not worth it.
Comments (54 posted)
At the Gran Canaria
Desktop Summit (GCDS) on July 4th, Nokia's Maemo marketing manager Quim Gil announced
that, beginning with the Harmattan release expected in 2010, the company
would adopt the Qt toolkit for Maemo's application and user interface
layer, replacing the Hildon user interface and
GTK components that have served that function since the platform's debut.
The announcement was met by a mixed reaction from the Maemo development
community, which is already faced with significant API incompatibilities
scheduled for the still-to-come Fremantle release, but most agreed that the
move was inevitable in light of Nokia's acquisition of Qt creator Trolltech.
Gil outlined the transition in his talk during the combined GNOME and KDE portion of GCDS, starting with
an overview of the current Maemo platform and explaining the upcoming
changes in Fremantle before addressing the move towards Qt. "Fremantle" is
the code name for Maemo 5.0, and although the software development kit
(SDK) has been released, the software and the new hardware devices on which
it will run have not. Fremantle will retain many of the same components on
which earlier Maemo devices were built, but also introduces some new ones,
including the PulseAudio sound
server, an X.org X Window implementation,
and the Upstart init system.
Fremantle will also support a community-maintained Qt toolkit, but the
Maemo environment and Nokia applications will remain on GTK and Hildon.
Starting with Harmattan, Gil explained, the GTK/Hildon and Qt frameworks
will swap places: the core applications and interface will be written for
Qt, and GTK/Hildon will be supported through a community-maintained stack.
He went on to explain that Nokia will continue to use and contribute to
numerous middleware components from the GNOME stack that have always been
pieces of Maemo — including GConf, GVFS, and GLib, among others —
and emphasized that the move to Qt would bring Maemo a step closer to being
a traditional Linux platform, rather than a partially-compatible, niche one.
Regarding the decision to move from GTK/Hildon to Qt, Gil's talk cited
two factors. First, the Qt framework is available on desktop systems and
handheld devices like Symbian-powered phones, making it easier for
application developers to use one tool chain and one API across a variety
of platforms (although GTK is available on desktop systems, Hildon is
designed for mobile devices). Second, Nokia plans on furthering
development of the Qt toolkit through its QtMobility
project, which will develop entirely new APIs for mobile devices running
Symbian, Maemo, or Windows CE.
Responses to Nokia's announcement hit Maemo-related blogs and discussion
boards almost immediately. Some expressed optimism about the switch, but
many more expressed their reservations — not about Qt itself, but
about the sudden switch from one toolkit to another, and the accompanying
switch from C to C++ as the core language. In the Maemo Talk forum thread discussing
the announcement, several posters expressed concern that neither
Fremantle nor Harmattan would officially support both the old and
the new toolkit, thus leaving third-party developers without a smooth migration path.
Furthermore, the difficulty of the toolkit switch between Fremantle and
Harmattan is compounded by the fact that Fremantle will break compatibility
with the Maemo 4.x-series, thus forcing two consecutive rewrites onto
developers. Others in the thread questioned the timing of the
announcement, since Fremantle has yet to be released. According
to forum member "pycage", "This story gives me the impression that
Fremantle, from a developer's point of view, is already obsolete even
before it was released."
Murray Cumming of embedded Linux company Openismus observed
on his blog "it's clearly a rather arbitrary and disruptive
decision. I suspect that some managers are dictating the Nokia-wide use of
Qt even in projects that don't otherwise share code, without
understanding the difficulty of the change. UI code is never just a thin
layer that can be replaced easily, even if it looks like just another block
in the architecture diagram. Likewise, hundreds of C coders cannot become
capable C++ coders overnight. I expect great difficulties and delays as a
result of the rewrites [...]"
Gil, however, defended the move to forum members, noting
that "providing commercial support on both frameworks [GTK/Hildon and
Qt] for both releases [Fremantle and Harmattan] implies an amount of work
that we simply can't nor want to commit [to]." He responded
to developers' fears of two consecutive releases breaking compatibility by
assuring them that the transition would be smooth because of the consistent
middleware layer, and noted that the advance timing of the announcement
itself was an effort to give early warning so that developers could have
adequate time to prepare. He also said
that it was too early to draw conclusions about compatibility across
releases, since many of the details of Harmattan are still unknown.
Nokia and Qt ... and Symbian
Uncertainty about transitioning between two user interface toolkits
aside, no one seemed surprised by the announcement that Maemo would move to
Qt, given that Nokia acquired
Trolltech — subsequently renamed Qt Software — in January of
2008. As Gil alluded, moving Maemo to Qt allows Nokia to more efficiently
repurpose its engineering resources toward the development of a single
software stack. More importantly, however, by using Qt on Maemo Nokia will
be "eating its own dogfood," and can thus more actively promote Qt as a
commercial solution to application developers.
Several in the community noted that, in the market for Qt, Maemo is
considerably smaller than desktop Linux, which itself is considerably
smaller than the smartphone operating system Symbian, which Nokia acquired
in June of 2008. Shortly after the acquisition, Nokia announced plans to release
Symbian as open source software, and set up the Symbian Foundation to manage the code.
The company released its first preview of Qt running on top of Symbian in
October of that year, and has continued to develop it as a "technology
preview," highlighting its cross-platform capabilities.
Maemo Talk forum member "eiffel" speculated
that Nokia's plan might be to somehow merge Maemo and Symbian into a single
OS, but Gil's talk presented a more straightforward plan: he described
Maemo as the platform best suited to high-end mobile hardware like mobile
Internet devices (MID), occupying the middle slot in between
Symbian-powered phones and full Linux desktops. Maemo bloggers Daniel
Gentleman and Jamie Bennett
both observed that by acquiring Qt and Symbian, Nokia was better
positioning itself to compete against the full range of handheld devices
that are soon expected to be running Android.
Gil commented in the Maemo Talk discussion that Nokia plans to develop
new Qt APIs specific to handheld devices under the QtMobility project. The
QtMobility site lists three new APIs: connections
and a service
framework. Source code for all three is available from a public Git
repository, although none have yet been bundled for stable release. Gil indicated
that the new APIs will be shared across the platforms, but that Maemo and
Symbian will not share other code. "The interest is to align on the
API level. Then each platform will push its own identity and strengths
based on the target users and form factors of the products released. This
means that UI and pre-installed application might differ, and in some cases [...]"
Moving Maemo from GTK/Hildon to Qt may be painful in the short term
— at least for some developers — but the long term benefits of
a single toolkit for both Linux-based and Symbian-based platforms no doubt
made the decision easy for Nokia. The big question remains —
regardless of whether it uses GTK/Hildon or Qt — where does Nokia
intend to take Maemo itself? The platform has plenty of fans in the open
source community, but it remains a niche OS.
Since its debut, Maemo has shipped on only three Nokia devices: the 770,
N800, and N810 Internet Tablets, the last of which was launched in 2007.
Although a community effort
exists to extend the platform's hardware support, and it can run on a BeagleBoard development motherboard, to
date no non-Nokia consumer product has ever adopted Maemo as its operating
system. Fremantle will reportedly launch on a new generation of device,
running on OMAP3-based
hardware that Nokia notably does not refer to as an Internet Tablet like
its predecessors, opting instead to use generic terminology like "device."
It would require considerable reading between the lines to speculate
that Nokia intends to ship Maemo on its high-end smartphones, especially
considering that the company has continued to push Symbian as its platform
of choice for its high-end N-series and E-series phones four years
after launching Maemo. But unless Nokia plans to offer more products in
the MID product category, it does seem strange to expend resources
maintaining an entire operating system for a single device, especially
while touting the multi-platform reach of Qt as one of its strengths.
Comments (10 posted)
While Linux systems generally have a good reputation for uptime, there are
sometimes unavoidable reasons that a reboot is required. Typically, that is
because of a kernel update, especially one that fixes a security hole.
Companies that have long-running processes, or those who require
uninterrupted availability, are not particularly fond of this requirement.
A new company, Ksplice, Inc. has come
up with a way to avoid these reboots by hot-patching a running kernel.
The technique used by the company, unsurprisingly called Ksplice, is free
software, which we looked
at last November on the Kernel page. (An earlier look, from April 2008, may also
be instructive). The basic idea is that by doing a comparison of
the original and patched kernels, one can build a kernel module that will
patch the new code into the running kernel.
For simple code changes, the process is fairly straightforward. Each kernel
is built with a special set of flags to simplify determining which functions
have changed as a result of the patch. Those changes are packaged up into
the module, and then applied when the module is loaded. Then there is the small
matter of ensuring that the kernel is not currently executing any of the
functions to be replaced. In order to do that, the kernel is halted while
each thread is examined; if none are running the affected code—or
have a return address into the code on their stack—the patch
is made and the kernel can go on its way. Otherwise, Ksplice delays for a
short time and tries again, eventually giving up if it cannot satisfy that condition.
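
The safety condition itself is easy to state, even if implementing it inside a
running kernel is not. The toy sketch below is ordinary user-space Python,
nothing like the real Ksplice code; it only illustrates the shape of the
check-and-retry loop: given the address ranges of the functions being replaced
and a snapshot of each thread's instruction pointer and stack return
addresses, apply the patch only when nothing falls inside those ranges, and
otherwise back off and try again a bounded number of times.

    # Toy illustration of the check-and-retry logic described above; it is
    # ordinary user-space Python, nothing like the real in-kernel Ksplice
    # code.  A "thread" here is just its instruction pointer plus the
    # return addresses on its stack, and the patch targets are the address
    # ranges of the functions being replaced.
    import time

    def thread_is_safe(thread, patched_ranges):
        # Safe if the thread is neither executing in, nor will return
        # into, any function that is about to be replaced.
        addresses = [thread["ip"]] + thread["return_addresses"]
        return not any(lo <= addr < hi
                       for addr in addresses
                       for (lo, hi) in patched_ranges)

    def try_to_patch(snapshot_threads, apply_patch, patched_ranges,
                     attempts=10, delay=0.05):
        # Retry until every thread is clear of the patched code, then
        # patch; give up after a bounded number of attempts.
        for _ in range(attempts):
            threads = snapshot_threads()   # stands in for halting the kernel
            if all(thread_is_safe(t, patched_ranges) for t in threads):
                apply_patch()              # stands in for redirecting the code
                return True
            time.sleep(delay)              # let threads move on, then retry
        return False

    if __name__ == "__main__":
        # Tiny demo with fake data: the first snapshot is unsafe (a thread
        # is inside the patched range), the second is clear.
        snapshots = iter([
            [{"ip": 0x1010, "return_addresses": [0x2000]}],
            [{"ip": 0x3000, "return_addresses": [0x4000]}],
        ])
        ok = try_to_patch(lambda: next(snapshots), lambda: print("patched"),
                          patched_ranges=[(0x1000, 0x1100)], attempts=2, delay=0)
        print("success" if ok else "gave up")

In the real system the snapshot and the check happen atomically while the
machine is halted; the sleep here merely stands in for the delay between
attempts.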
There are several kinds of changes that are much more difficult to handle,
particularly data structure changes. For those, someone needs to analyze
the changes and write code to handle munging the data structures
appropriately. Ksplice has an infrastructure that allows this data
structure manipulation to be done while the kernel is halted, but the code
itself is, or can be, non-trivial. To a great extent, it is the knowledge
of how to do this with Ksplice that the company is offering as a service.
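
To give a flavor of what that hand-written code involves, the toy Python
example below (made-up structure names, no relation to the actual Ksplice
infrastructure) migrates instances of an old structure layout to a new one
that gained a field; deciding how every live instance maps onto the new
layout, while the system is quiesced, is exactly the part that cannot be
automated.

    # Toy example of the kind of hand-written transformation code a data
    # structure change requires.  It is ordinary Python with made-up
    # structure names, purely to illustrate the shape of the work; it has
    # nothing to do with the actual Ksplice infrastructure.
    from dataclasses import dataclass

    @dataclass
    class OldTaskStats:            # layout before the patch
        run_count: int
        total_runtime_ns: int

    @dataclass
    class NewTaskStats:            # layout after the patch: one new field
        run_count: int
        total_runtime_ns: int
        max_runtime_ns: int

    def transform(old):
        # The new field cannot be recovered from the old data, so the
        # transformer has to choose a safe initial value.  Judgment calls
        # like this are why the conversion code must be written by hand
        # and applied while the system is quiesced.
        return NewTaskStats(old.run_count, old.total_runtime_ns,
                            max_runtime_ns=0)

    if __name__ == "__main__":
        live_instances = [OldTaskStats(3, 1500), OldTaskStats(7, 9200)]
        print([transform(o) for o in live_instances])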
As a test of the technology, the Ksplice developers looked at all of the
security problems listed for the kernel over a three-year period (May 2005
to May 2008). Of the 64 Common
Vulnerabilities and Exposures (CVE) entries for the kernel that had an
impact worse than denial of service, Ksplice was able to patch 56 without
any additional code being written. The other eight could be handled with
a small amount of code—an average of 17 lines per patch.
As a further demonstration of the Ksplice technique,
Ksplice, Inc. is currently offering a free-beer service for Ubuntu 9.04
(Jaunty Jackalope) users. Ksplice Uptrack
will allow those users to update their kernels
without rebooting. The Ksplice folks will be tracking the Ubuntu
kernel git tree, turning those changes (security and bug fixes) into
modules that can be retrieved and applied with the Uptrack client. According to
the FAQ, Uptrack will support the
latest release of Ubuntu: 9.04 for now, switching over to 9.10 (Karmic
Koala) when that is released.
As noted, Ksplice itself is free software,
available under the GPLv2, and the Uptrack client is as well. That leads
to a service-oriented free software business model for Ksplice, Inc. While
their exact plans are not yet clear, providing similar updates for
enterprise kernels (RHEL and SUSE), but charging for those, would seem an
obvious next step.
Other areas for expansion include other operating systems as well as
user-space applications. In an interview, Waseem Daher,
co-founder and COO of Ksplice, described the company's goal:
The long term vision is that, at the end of the day, all updates will be
hot updates — updates that don't require a reboot or an application
restart. This is actually a big problem because if you look at technology
used in data centers, no-one has a good solution for software updates, from
as low level as your router or SAN, up to your virtualization solution, the
operating system, the database, and the critical applications. Right now,
all these updates require you either to reboot the system or restart the application.
This is a big pain point for sysadmins because, on the one hand you have to
apply the updates so that you can fix important security problems, but on
the other if you don't then you're vulnerable. When you do apply them,
though, there's downtime and that's lost productivity. There's a real
cost associated with the downtime. We want to take the technology that
we've developed and use it to make life easier in the data center. That's the
broad vision for where we're going with the company, and we're starting [...]
That's a rather ambitious vision, but one that seems in keeping with where
things are headed. No matter how fast booting gets, it is still a major
annoyance, for servers or desktops. Even restarting applications,
particularly things like database servers or desktop environments, leads to
lost time and productivity. Whether Ksplice, Inc. can expand their
offerings to reach that goal is an open question.
One of the problems that Ksplice will face is competition. In the Linux
world, that could come from distributors deciding to start making Ksplice
modules themselves, and either charging their customers for them, or adding
that capability to their subscription-based support offerings. In the
proprietary, closed source world, Ksplice will have to work with the
vendors of operating systems and applications so that it can access the
source code. Those vendors are most certainly going to want a piece of the
pie for that access.
There may also be technical hurdles. One botched kernel update that
introduced a serious flaw—security or otherwise—could ruin
the company's reputation. That, in turn, might make it much harder to
win over new customers. Hot-patching is a subtle, difficult problem to get right.
On the other hand, Ksplice has an excellent pedigree: it was started by four MIT
students and is based on co-founder Jeff Arnold's master's thesis. Ksplice also won
MIT's $100,000 entrepreneurship competition—against some
stiff competition, one would guess. Arnold's reasons for looking at the
problem will resonate with system administrators everywhere: he delayed
patching an MIT server to avoid downtime on a busy system, so an attacker
took advantage of that window.
It will be interesting to watch both Ksplice and the general idea of
hot-patching over the coming years. When Ksplice was first introduced, a patent
on the technique was noted on linux-kernel, along with protestations of
rather old (PDP-11) prior art. How that plays into the fortunes of Ksplice
and others who come along will be interesting, and potentially important.
Comments (18 posted)
Are you able to sell web advertisements, knowledgeable about free software and LWN, and interested in taking on some work? If so, LWN.net would like to talk to you; please read this
Comments (none posted)
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Crying wolf over OpenSSH; New vulnerabilities in dbus, dhcp, djbdns, libtiff,...
- Kernel: Communicating requirements to kernel developers; Rootless X; A new way to truncate() files
- Distributions: Fedora: (another) proposal for extended support; Fedora 9 EOL; Slackware for ARM; Hardening CentOS; Jolicloud; Torinux.
- Development: Uzbl: a browser following the UNIX philosophy, KDE growth metrics, new versions of Rivendell, Hatta, Ardour, GNOME, gerbv, Freecell Solver, Moovida, HylaFAX, Sage, freebase-python, Unladen Swallow, Padre, Pydev, oejskit.
- Press: Gran Canaria Desktop Summit coverage, Google Chrome OS, USPS uses SUSE, interviews with Aurora and Shuttleworth, Linux audio discussions, VirtualBox review.
- Announcements: GNOME annual report, USP to support Openmoko, MontaVista boots in 1 sec, State of Text Rendering, PHP TestFest coverage, EuroBSDcon, LLVM dev meeting, LPC microconferences, openSUSE Hack Week.