The One Laptop Per Child platform was always going to present some
interesting security challenges. Millions of identical, network-attached
systems will be deployed into some remote parts of the world, where they
will be managed by people who are not security experts. The systems will
be obvious targets for theft, self-propagating malware, and the creation of
botnets. None of these activities feature highly on the OLPC project's
list of educational objectives, so it stands to reason that some
significant thought needs to go into how to prevent them.
The person charged with the OLPC's security thinking is Ivan Krstić. The
initial results of his work, done with help from Simson Garfinkel, have now
been posted with a request
for comments. Ivan and company have come up with a platform named
"Bitfrost," which, it is hoped, will keep OLPC systems out of trouble and
available for their owners. At this point, there is quite a bit of
information on what Bitfrost will do, but very little on how it will be
implemented.
After an introduction on the shortcomings of the traditional Unix file
permissions model, the Bitfrost specification gets into the overriding
principles and goals. The principles are consistent with the approach the
OLPC project has taken so far: security cannot depend on hardware or
software design secrets, it must be possible for users to gain complete
control over the system, security cannot depend on the user being able to
read, and the security mechanism must be unobtrusive. "Unobtrusive" does
not mean that security won't ever get in the way; instead, it means that
the user will not be pestered by popups with security-related questions.
The associated goals include no user passwords, no unencrypted
authentication, a system which is secure when it is first powered on, a
very limited use of public-key encryption infrastructure, and no permanent
data loss.
The process starts at manufacturing time, when each laptop will be equipped
with unique, randomly-generated serial and UUID numbers. The laptop starts
out in a non-functional, deactivated state; making it work involves the use
of a special activation key generated from the serial number and UUID.
The customer countries will have lists of serial and UUID numbers; from
those they will be able to create the activation keys. The plan is for these
keys to be generated in small batches and shipped, on a USB key, to the
destination schools. Once installed on a server there, the keys can be
used to enable the laptops sent specifically to that school. The purpose
here is to deter thieves who would grab pallets of laptops; without the
activation keys, those laptops would only be useful as spare parts.
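The specification does not publish the actual key-derivation scheme, but the idea can be sketched. The following is purely illustrative, assuming an HMAC over the serial number and UUID under a secret held by the country; the function name and code format are invented for the example:

```python
import hmac
import hashlib

def activation_key(serial, uuid, country_secret):
    """Derive a per-laptop activation code from its serial number and
    UUID.  Bitfrost does not publish the real derivation; this
    HMAC-SHA256 construction is purely illustrative."""
    msg = f"{serial}:{uuid}".encode()
    return hmac.new(country_secret, msg, hashlib.sha256).hexdigest()

# The laptop, which knows its own serial number and UUID, can check a
# code it is given; a thief with a pallet of laptops, but without the
# country-held secret, cannot generate one.
code = activation_key("SHF7000123", "d41d8cd9-aaaa-bbbb-cccc-0998ecf8427e",
                      b"country-held secret")
```

The important property is that key generation requires the secret while verification only requires the laptop's own identifiers, which is what makes unactivated stolen laptops useful only as spare parts.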
There is an interesting step which happens once a laptop is activated and
first booted:
On first boot, a program is run that asks the child for their name,
takes their picture, and in the background generates an ECC key
pair. The key pair is initially not protected by a passphrase, and
is then used to sign the child's name and picture. This information
and the signature are the child's 'digital identity'.
The laptop transmits the (SN, UUID, digital identity) tuple to the
activation server. The mapping between a laptop and the user's
identity is maintained by the country or regional authority for
anti-theft purposes, but never reaches OLPC.
The ability to locate the proper owner of an OLPC system has obvious
advantages; it should help to keep each laptop in the proper set of small
hands. On the other hand, the potential for a repressive government to
misuse this data seems real; it would be sad if the OLPC systems could not
be used for truly free communications without fear about who might be
listening.
At the BIOS level, security will be handled as described in this LWN article from last
August. The BIOS will only be rewritable when the new image has been
signed with a special cryptographic key. There will be "developer keys"
available which will enable a laptop's owner to reflash the BIOS, but, in
general, the children will not have that functionality available to them.
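The firmware-update check amounts to signature verification against a public key baked into the BIOS. The XO's actual signing scheme is not described here; this sketch uses Ed25519 from the `cryptography` package, with all names invented for the example:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Illustrative stand-ins: the vendor signs firmware images offline,
# and only the corresponding public key ships in the BIOS.
vendor_key = Ed25519PrivateKey.generate()
baked_in_public = vendor_key.public_key()

image = b"<new firmware image>"
signature = vendor_key.sign(image)

def flash_allowed(image, signature):
    """Model of the BIOS check: accept a new image only if it
    verifies against the baked-in public key."""
    try:
        baked_in_public.verify(signature, image)
        return True
    except InvalidSignature:
        return False
```

A developer key, in this model, would simply be a way to bypass or replace the baked-in verification key for a specific laptop.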
At the Linux level, security will be handled through a set of privileges
assigned to each installed program. Privileges look much like Linux
capabilities, but they are not capabilities; they are a new layer of
protections which will be implemented via some other means. Some of the
expected privileges will include:
- P_SF_CORE: the ability to modify the core software on the
system. This privilege is normally off, and cannot be enabled without
a special developer key. There is also P_SF_RUN, which
allows modification of the currently-running system software. This
privilege works by way of a copy-on-write filesystem mechanism;
software changes are saved as copies. This mechanism makes it easy to
revert the system to its initial state should the need arise.
- P_NET: a group of controls on network access. Programs can
be denied access to the net entirely, or they can have any of a wide
range of bandwidth, time-of-day, and destination restrictions applied.
- P_MIC_CAM: programs can be granted (or denied) the ability to
use the camera and the microphone. There will also be LEDs (not
present on the current test systems) which will illuminate whenever
the camera or microphone are in use. So it should be difficult to use
an OLPC system to spy on its owner.
- There is a whole set of quotas designed to prevent a program from
using too much processor time, flash space, etc.
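The privilege model described above can be sketched simply. The privilege names come from the specification; the checking logic, function name, and developer-key handling below are assumptions for illustration:

```python
# Privileges that are off by default and cannot be enabled without a
# developer key (per the spec, P_SF_CORE works this way).
DEVELOPER_KEY_REQUIRED = {"P_SF_CORE"}

def granted(requested, has_developer_key=False):
    """Return the subset of requested privileges a program receives.
    Illustrative only; the real mechanism is a new protection layer,
    not Linux capabilities."""
    allowed = set(requested)
    if not has_developer_key:
        allowed -= DEVELOPER_KEY_REQUIRED
    return allowed

# An ordinary activity can get network and camera access, but not the
# ability to modify core system software:
normal = granted({"P_NET", "P_MIC_CAM", "P_SF_CORE"})
```

The point of the design is that privileges are assigned per installed program, declared up front, rather than inherited from the user running it.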
In addition, every program will be run in an isolated mode:
A program on the XO starts in a fortified chroot, akin to a BSD
jail, where its visible filesystem root is only its own constrained
scratch space. It normally has no access to system paths such as
/proc or /sys, cannot see other programs on the system or their
scratch spaces, and only the libraries it needs are mapped into its
scratch space. It cannot access user documents directly, but only
through the file store service, explained in the next section.
Again, details on just how the sandbox will be implemented are scarce for
now - though your editor has heard from Mr. Krstić that it will be
based on Linux-VServer.
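The scratch-space idea can at least be sketched at the filesystem level. This builds only the directory skeleton a jailed program would see; the bind-mounting of libraries and the actual chroot() both require root and are left as comments. All names here are invented for the example:

```python
import os
import tempfile

def make_scratch_space(program, root):
    """Create the constrained filesystem root a program would see.
    Bitfrost's jail is reportedly Linux-VServer-based; this sketch
    only lays out the skeleton of a per-program scratch space."""
    scratch = os.path.join(root, program)
    for sub in ("lib", "tmp", "data"):      # only what the program needs
        os.makedirs(os.path.join(scratch, sub), exist_ok=True)
    # In a real jail: bind-mount the required libraries read-only into
    # scratch/lib, then os.chroot(scratch) before exec'ing the program,
    # so /proc, /sys, and other programs' spaces are simply not there.
    return scratch

with tempfile.TemporaryDirectory() as root:
    scratch = make_scratch_space("paint-activity", root)
    visible = sorted(os.listdir(scratch))
```

From inside the jail, `visible` is the entire world: no system paths, no other programs, no user documents.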
The "file store service" is described as a sort of object-oriented
database for documents, "similar in very broad terms to the Microsoft
WinFS design". All access to files from programs goes by way of a
user dialog; there should be no way for a program to modify files outside
of its own scratch area without the user knowing about it.
There is also an optional anti-theft mechanism:
It works by running, as a privileged process that cannot be
disabled or terminated even by the root user, an anti-theft daemon
which detects Internet access, and performs a call-home request --
no more than once a day -- to the country's anti-theft servers. In
so doing, it is able to securely use NTP to set the machine RTC to
the current time, and then obtain a cryptographic lease to keep
running for some amount of time, e.g. 21 days. The lease duration
is controlled by each country.
If a machine has been reported as stolen, the "anti-theft server" will
instruct it to shut down hard and go back into the deactivated state. The
same thing will happen eventually if the stolen system is kept isolated
from the net. This mechanism should help to deter thefts; one can only
hope that it is sufficiently well designed that nobody figures out how to
trigger it as a denial of service attack.
The phone-home feature can be disabled - but only in the presence of
a developer key.
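The lease logic is simple to model. The 21-day figure is the spec's own example; the function and variable names below are invented, and the real daemon verifies a cryptographically signed lease against an NTP-set clock rather than comparing raw timestamps:

```python
import time

LEASE_SECONDS = 21 * 24 * 3600   # example duration; set by each country

def lease_valid(issued_at, now=None):
    """True while the laptop's lease to keep running is current.
    Illustrative only: the real daemon obtains a signed lease from the
    country's anti-theft server and trusts an NTP-set RTC."""
    if now is None:
        now = time.time()
    return now - issued_at < LEASE_SECONDS

# A stolen laptop kept off the net simply runs out its lease:
still_ok = lease_valid(issued_at=0, now=20 * 24 * 3600)
expired = not lease_valid(issued_at=0, now=22 * 24 * 3600)
```

The daily call-home renews the lease; a laptop reported stolen is refused renewal (or actively deactivated), and one kept offline expires on its own.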
One feature which will not be built into the laptops is filesystem
encryption. The CPU in the OLPC XO laptop is simply too slow to perform
that task without bogging down the system entirely. This issue will be
reconsidered in the future. The OLPC developers have also explicitly
decided to stay out of the content-filtering business.
In summary, the security model developers have this to say:
[W]e believe we've imbued the OLPC security system with cunning and
more magic art than other similar works of craftsmanship -- but not
for a second do we believe we've designed something that cannot be
broken when talented, determined and resourceful attackers go forth
harrying. Indeed, this was not the goal. The goal was to
significantly raise the bar from the current, deeply
unsatisfactory, state of desktop security.
If the implementation lives up to the specification, chances are that the
project will have achieved that goal. The OLPC platform is an ambitious
experiment from beginning to end, and its developers have, once again, not
wasted the opportunity to do something interesting with it. If the
security ideas incorporated into the OLPC systems work out as desired, it
would not be surprising to see at least some of them adopted by other
desktop environments. This could be another case where the OLPC project
creates benefits for a large group of people beyond its immediate target.
Toward the end of his linux.conf.au talk, Andrew Tanenbaum put up a few slides on the
runtime cost of the microkernel approach. He had quite a few benchmarks,
but the bottom line was that the microkernel architecture
used in Minix imposed a roughly 5-10% performance penalty, depending on
what one is trying to do. While operating systems hackers would normally
cringe at the prospect of paying a 5% penalty, to many people this could
seem like a good deal: give up 5-10% of a processor which is mostly idle
anyway in exchange for a more reliable system.
In truth, neither the claim of a 5-10% penalty nor that of higher
reliability has been proved in any definitive way. At the conference,
a number of attendees questioned the way in which the benchmarks had been
done, suspecting that Minix had been benchmarked against a monolithic
version of itself. If that is the case, the benchmarks will capture the
context switching costs but will have nothing to say about the costs of the
message-passing architecture. To get a true measure of the penalty of
the microkernel architecture, it was suggested, one should benchmark Minix
against Linux.
As it turns out, the linux.conf.au swag bag contained a CD with Minix 3.1.2a
on it; one might almost think the organizers had this sort of test in mind.
So your editor came home with the intention of installing that version of
Minix and doing a bit of benchmarking. That job has now been done, and we
can talk about how Minix and Linux compare.
Time for a brief digression:
once, some years ago, your editor actually had a spare moment in which to
see how nethack was coming along. One must stay on top of all the
important development projects, after all. The graphics have improved, the
game contained more monsters than ever, etc. But there is an especially
amusing moment when one drops into a level and is informed of a sense of
having entered a more primitive place. The graphics on that level are
VAX-era rogue, and the whole thing feels rough and, well, primitive.
A similar feeling will come over a Linux user who tries to get things done
on a Minix system. It is a POSIX-like environment, and it has a working version
of the X Window system (but don't go in expecting GNOME or KDE), but that's
as far as it goes. The
shell is painful to use, many commands are
missing, and one runs into obstacles on every path. Since Minix
does not really do paging, memory quickly runs out if too many processes
are run; your editor had not seen the old "not enough core" message in
quite some time. One of the harder things to do on Minix, it turns out, is
to build any sort of non-trivial software package - even after figuring out
that the default C compiler is crippled but gcc can be found under
/usr/gnu. As a result, your editor had to give up on most of his
attempts to build current benchmarks; they just would not compile on Minix.
In the end, your editor succeeded in building and running two benchmark
programs: IOtest and UnixBench. Neither seems to be recent enough to have
a currently-maintained web page. IOtest is a disk exerciser, evidently
intended originally as a tool for driver developers. It's
useful for exercising drives in a serious way;
it also produces performance numbers on the side. UnixBench was developed
by Byte in the 1990s, and hasn't seen a whole lot of work since. It
remains, however, a useful way to get a snapshot of the relative speeds of
many operating system functions.
The benchmarks were run on an AMD Athlon 1700 system using an unremarkable
ATA disk. There are three partitions on the disk: one for the operating
system, one for swap (Linux only, since Minix does not support it), and one
for destructive disk tests. The partitioning was not changed between the
installations. Minix does not support partitions larger than 4GB (who
could ever need more than that?) so the disk tests were restricted to 4GB
on both systems. The Minix tests were done on a full installation of Minix
3.1.2a; the Linux side was represented by a late-September
Debian Etch snapshot running a 2.6.17 kernel.
The IOtest read test simply performs random reads of varying sizes,
starting with one process and going up from there. IOtest can run a large
number of competing processes, but your editor limited it to four so as to
avoid running into Minix's memory limitations. For the curious, the full Minix results and Linux results are available. The bottom line
is that the results are nearly comparable: for all practical purposes, the
two systems performed about the same. Similar things can be said about the
results (Minix, Linux) of the read/write test, which are
summarized in the plot to the right (the dashed line represents Minix).
Comparable results would be expected with a benchmark like this, since it
will be dominated by the drive's seek performance. The portion of the disk
being exercised (only 4GB, remember) was not enough to demonstrate a
difference in I/O scheduler implementations. The disk never comes near its
peak I/O rate. So the main conclusion to draw from these results is that
Minix does not get terribly in the way.
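IOtest itself is hard to come by now, but the read test's shape is easy to reproduce: random seeks and fixed-size reads against a file or device. This simplified stand-in (not IOtest, and a file rather than a raw partition) shows the pattern:

```python
import os
import random
import tempfile

def random_read_bytes(path, block_size, reads):
    """Issue random reads of block_size against path, in the spirit of
    the IOtest read test; returns the total bytes read.  A real run
    would use the raw device and time the operation."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        done = 0
        for _ in range(reads):
            f.seek(random.randrange(0, size - block_size))
            done += len(f.read(block_size))
    return done

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(1 << 20))        # 1MB test file
total = random_read_bytes(tf.name, block_size=4096, reads=100)
os.unlink(tf.name)
```

On a real disk, a workload like this is dominated by seek time, which is why the kernel's overhead, whether Minix's or Linux's, barely shows.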
The UnixBench results (raw results: Minix, Linux) paint a rather different
picture. These results are summarized in the plot to the left; the upper
bar for each test represents Linux.
The measured system call overhead for Minix is a full ten times
higher than the value for Linux. The file copy tests ran between two and
ten times faster on Linux. Pipe throughput differed by a factor of seven;
Minix was 140 times slower at process creation. The difference in shell
script execution performance, however, was 1.4 - in Minix's favor. One
assumes that the rather simple shell provided by Minix is, at least, faster
than its Linux counterpart.
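The system call overhead measurement is easy to approximate. This sketch times a trivial system call in the spirit of UnixBench's syscall test; it is not UnixBench, and the Python interpreter adds per-call overhead the real C benchmark does not have:

```python
import os
import time

def syscall_overhead(iters=100_000):
    """Rough per-call cost of a trivial system call, in seconds.
    os.getppid() is used because getpid() results have historically
    been cached in userspace by some C libraries."""
    start = time.perf_counter()
    for _ in range(iters):
        os.getppid()
    return (time.perf_counter() - start) / iters

per_call = syscall_overhead()
```

On Linux the trap into the kernel is a few hundred nanoseconds; on a microkernel the same request may involve a message to a server process and back, which is where the factor-of-ten difference comes from.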
One can argue that Minix is a new and unfinished system which has not, yet,
had the benefit of a great deal of performance tuning. There is doubtless
some merit to that claim; the Minix folks will probably find a number of
ways to make things faster. On the other hand, it would not be
unreasonable to argue that Linux, by supporting much greater functionality
on a far wider range of hardware, has every right to be slower - but it's
not. Linux is quite a bit faster; the Minix folks certainly ran benchmarks
which showed a 5-10% difference, but they were not benchmarking against
Linux.
Dr. Tanenbaum made the claim that only a computer geek would accept better
performance if that trade brought with it lower reliability. By that
reasoning, it doesn't matter that Minix is much slower than Linux on the
same hardware; Minix is aiming for a different goal. But people do care
about performance; the fact that Dr. Tanenbaum felt the need to put up
benchmark results suggests that he cares too. Trading some performance for
reliability could well be a good deal. When one compares Minix (in its
current state) to Linux, however, the performance difference is large, and
the increased reliability is unproven.
Last week's reader survey drew just about 1000 responses -
approximately 25% of our entire subscriber base. We appreciate the time
you all took to tell us what you think about LWN. Fully digesting the
responses will take some time, but there are a few things which jump out
immediately.
About 90% of those who responded were individual subscribers. As it
happens, almost 25% of LWN subscribers get their access through group
subscriptions, but fewer of them took the time to respond. Perhaps people
on group subscriptions tend to be busier, or perhaps fewer of them
follow LWN every week. In any case, the opinions of group subscribers
were somewhat underrepresented.
A full 50% of the responses came from Europe, compared to 39% from North
America and 5% from Australia and New Zealand. It has been a while since
we had accurate statistics of where our readers are coming from - the
current LWN server isn't up to the task of recording all that information.
Once upon a time, North Americans and Europeans made up approximately equal
parts of our reader base. It would be interesting if the Europeans have
now pulled ahead.
There were few surprises in the responses on which parts of LWN readers
enjoy the most. It seems we'll have to keep the Kernel Page after
all. Seriously, though, the most interesting result may have been the
relatively low scores given to the weekly Announcements Page. One of the
things we have noticed over the years is that a surprising number of items
from that page end up being mentioned in the annual LWN timeline feature.
Important stuff goes on that page, but it is currently set up as a sort of dumping
ground at the very end of the Weekly Edition. Some changes may be called
for.
Quite a few readers were surprised to discover the index of kernel articles. The
index was prominently announced on the Kernel Page when it was created, and
it's linked at the top of the kernel
subsection page. But, clearly, it is not easy enough for people to find.
A number of respondents suggested that the time has come for a site
redesign. Trust us, we know that. The current design is mostly unchanged
since its unveiling in June, 2002, but it really dates back to January,
1998, when LWN first hit the net. Our purpose was to create a clean,
easy-to-read, text-oriented site, and the result has served us well for
some time. But it is definitely time to rethink things. That will be a
slow process, however.
Complaining about comment quality has been a popular activity in recent
times, but there was not a great deal of interest in either of the proposed
comment filtering mechanisms. A few readers really do want a blacklisting
capability, though. Instead, there were a number of requests for a
feature which would highlight comments posted to an article since the last
time one looked. Both blacklisting and highlighting (and many other
potential features) run into one practical problem: the single
1300 MHz Duron processor which runs the entire LWN site is already feeling
a little stressed. The more complicated content - weekly edition pages,
long comment trees, etc. - is aggressively pregenerated and cached; adding
per-user rendering would defeat that caching and force those pages to be
rendered on the fly. For various reasons,
upgrading the server involves far more expense than just buying a new box.
The day when we have to make that leap is coming, though.
There was a suggestion that the entire LWN archive be closed to
non-subscribers. That is not a step we expect to take. Closing the
archive would make LWN disappear from the net for all practical purposes,
with little in the way of expected benefit. It is also very much our goal
to increase the amount of useful information available to the community as
a whole, and that runs counter to the idea of a closed archive.
For those who called for more Grumpy Editor articles:
you have been heard. Those articles are a lot of work, and times have been
busy, which is why they have been relatively scarce recently. There
are a couple of topics queued up, however, so expect the Grumpy Editor to
make another appearance here before too long.
In summary: the information you have provided is useful - we are most
grateful. We will be looking at it closely as we ponder changes to LWN to
help make it more successful in the future. What will not change, however,
is our commitment to high-quality writing and high-quality coverage of the
Linux and free software community from within.
Page editor: Jonathan Corbet