By Jake Edge
June 3, 2009
Allowing browsers to run native code downloaded from a web site has some
attractions, at least at first blush. But, once some thought is put into
it—or the serious security problems with Microsoft's
ActiveX are recalled—the security flaws of the scheme become readily
apparent. Google is resurrecting the idea in their Native Client (NaCl)
research project, but rather than rely on trust, as ActiveX does, NaCl
takes steps to verify the code before running it. As a weblog posting
by Matasano Security describes, there are rather substantial technical
barriers to overcome, but, even then, there are still some fairly serious
repercussions to running native code from an untrusted site.
Native code is attractive because it allows for much better performance, along
with access to graphics and a user interface that isn't HTML-based. One of the
NaCl demos is a port of Quake so that it can run in the browser. Games are
certainly one place where NaCl is attractive, but it also means that existing
programs, at least those that aren't written in Java, Flash, or Silverlight,
do not have to be ported to a new language.
For those who think that essentially all applications will eventually be
delivered by the web, NaCl (or something like it) seems required.
But, as malware developers know, the x86 architecture has lots of ways to
obscure the operation of a program in order to try to elude any kind of
automatic vetting. The instructions are of variable length and malicious
programs can jump anywhere in the stream, not just to the instruction
boundaries found by a disassembler. In addition, x86 programs can execute
from data, so that malicious programs can write some code to memory and
jump there. These kinds of things cannot be determined by just examining the
program binary, so Google leveraged some earlier work
[PDF] to restrict the kinds of programs NaCl will execute.
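The problem is easy to demonstrate with a disassembler: the same bytes decode
into two completely different, but individually plausible, instruction streams
depending on where decoding starts. The following sketch uses the third-party
Capstone disassembler purely for illustration; it is not part of NaCl:

    # Same x86 bytes, decoded from two different starting offsets, yield
    # two different instruction streams.  Requires the third-party Capstone
    # disassembler (pip install capstone); illustration only, not NaCl code.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    code = b"\xe8\x00\x00\x00\x00\xc3"   # a 5-byte call followed by a ret

    md = Cs(CS_ARCH_X86, CS_MODE_32)
    for start in (0, 1):
        print(f"decoding from offset {start}:")
        for insn in md.disasm(code[start:], 0x1000 + start):
            print(f"  {insn.address:#x}: {insn.mnemonic} {insn.op_str}")

Starting at offset 0, the bytes are a call and a ret; starting at offset 1,
they are a pair of add instructions followed by a ret. A jump into the middle
of an instruction can thus reach code that no straightforward disassembly of
the binary ever showed.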
Basically, NaCl requires that the code be structured such that it
can be verified automatically. That means that disassembling the
code must produce a stream of recognizable instructions and that jumps must
land at the beginning of one of those instructions. In addition,
self-modifying code is disallowed. With those restrictions in place, NaCl
can verify that the code doesn't do anything that is disallowed.
NaCl then enforces some additional rules, disallowing memory management
hacks that could fool the verifier and requiring that all system calls go
through a "gate" in the first 64K of the code. Only certain calls are
allowed through the gate, which is how NaCl keeps untrusted code from making
arbitrary system calls.
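A drastically simplified sketch of that kind of verification, again using
Capstone, might look something like the following. It is not the real NaCl
validator, which has many more rules, including alignment requirements and
masking of indirect jumps; the 64K trampoline limit is taken from the
description above and the rest is a stand-in. It does show the shape of the
check, though: the whole text must decode linearly, instructions that trap to
the operating system are rejected, and direct control transfers must land on a
known instruction boundary or in the trusted gate region:

    # A toy verifier in the spirit of NaCl's checks; illustrative only.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    TRAMPOLINE_END = 0x10000   # first 64K: trusted system call gates
    FORBIDDEN = {"int", "int3", "into", "sysenter", "syscall",
                 "in", "out", "hlt"}

    def verify(code, base):
        md = Cs(CS_ARCH_X86, CS_MODE_32)
        boundaries = set()     # addresses where an instruction starts
        targets = []           # direct jump/call destinations
        end = base
        for insn in md.disasm(code, base):
            boundaries.add(insn.address)
            end = insn.address + insn.size
            if insn.mnemonic in FORBIDDEN:
                return False   # untrusted code may not trap to the OS itself
            if insn.mnemonic == "call" or insn.mnemonic.startswith("j"):
                try:
                    targets.append(int(insn.op_str, 16))
                except ValueError:
                    return False   # indirect transfer; the real validator
                                   # instead requires a masking sequence
        if end != base + len(code):
            return False       # ran into bytes that do not decode
        # every transfer must hit an instruction start or the gate region
        return all(t in boundaries or t < TRAMPOLINE_END for t in targets)

The real validator must also cope with instruction prefixes and the rest of
the x86 architecture's sharp edges; the point of the sketch is only that the
property being enforced is simple enough to check mechanically.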
Google has created a patched version of GCC that produces an ELF-format file
that follows the rules.
All of that may sound enticing, but Matasano puts a definite damper on
enthusiasm for the technique. In some ways, it is similar to what Java
applet sandboxes do, but Java has been around for quite some time, so many
of the problems with its implementation have been found and fixed. Google
sponsored a contest
to try to shake out some of the problems with NaCl. Matasano participated
and the
blog post is essentially a report of what they and others found.
The basic problem is that bugs in the verifier, loader, or trusted system
call gate can generally be turned directly into exploits that run
arbitrary code. The posting outlines a number of problems that they or
other contest teams found. Until the NaCl components reach a level of
maturity similar to—or even beyond—that of the Java applet
sandbox, running native code in the browser is going to be a dicey
proposition. A particular area of concern is the system call gate, which must
make its decisions based on which call is being made and the raw contents of
the memory being passed; that is a much harder problem than the equivalent
checks in the Java sandbox, which are expressed in terms of Java classes and
data structures.
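To see why, consider what the gate has to do for even a simple call that
passes a buffer: all it receives is a raw address and a length, and it must
decide whether that entire range lies inside the untrusted module's memory
before the trusted side touches it. Roughly (the sandbox layout and the helper
below are invented for illustration, not taken from NaCl):

    # Illustrative only: the kind of raw-pointer validation a system call
    # gate has to perform.  The sandbox layout here is made up.
    SANDBOX_BASE = 0x20000000
    SANDBOX_SIZE = 256 * 1024 * 1024    # untrusted module's address space

    def buffer_is_valid(addr, length):
        """Does [addr, addr + length) lie entirely inside the sandbox?"""
        if length < 0:
            return False
        offset = addr - SANDBOX_BASE
        # Compare offsets against the size rather than adding to the raw
        # address: in C, the "obvious" check addr + length <= limit can
        # wrap around for a hostile length and let the access through.
        return 0 <= offset and offset + length <= SANDBOX_SIZE

The gate must also copy arguments out of untrusted memory before checking and
using them, because another untrusted thread could rewrite them between the
check and the use. The Java sandbox never has to reason at this level; its
checks are made against typed objects that the virtual machine already
controls.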
But, even if all of the bugs with NaCl itself were found and fixed—an
impossible task—there is still an architectural hole that was
specifically removed from consideration in the contest: side-channel
attacks. A number of attacks against cryptographic keys and other sensitive
information can be mounted using timing
analysis. By timing repeated
executions of the code of interest, an attacker can extract information about
cache behavior and branch prediction, which can then be used to recover the
secrets.
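The measurement side of such an attack requires nothing exotic: just a
reasonably fine-grained clock and the ability to run the operation of interest
over and over. A generic sketch of the statistics involved (the operations
being timed here are harmless stand-ins, not an actual attack):

    # Sketch of the statistical core of a timing attack: time an operation
    # many times and let repetition separate small, data-dependent timing
    # differences from the noise.  The two "victims" are harmless stand-ins.
    import time
    import statistics

    def measure(op, runs=10000):
        samples = []
        for _ in range(runs):
            start = time.perf_counter_ns()
            op()
            samples.append(time.perf_counter_ns() - start)
        return min(samples), statistics.median(samples)

    data = bytes(range(256)) * 64

    print("short op:", measure(lambda: data[0]))
    print("longer op:", measure(lambda: sum(data[:64])))

A real attack would, of course, time an operation whose duration depends on
secret data, such as a table lookup that hits or misses the cache, but the
measurement machinery is no more complicated than this.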
While the side-channel attacks are probabilistic in nature, they
get better with repetition. If an attacker were able to embed that kind of
analysis in a popular game, for example, it would have ample
opportunity to run. Since the abilities required by a
side-channel program are not very different from those needed by other,
legitimate programs that NaCl would want to run, there is little that can be
done to stop this kind of abuse. Whether it is a practical attack is hard to
judge, but
undoubtedly some attackers are already looking at it.
It seems likely that any security-conscious user is not going to be too
interested in running code in NaCl anytime soon—if ever.
Unfortunately, the same folks who are willing to run ActiveX programs from
random internet sites might be quite willing to do the same with NaCl.
That could lead to an ugly security breach of some kind, but one could
argue that it is not
really any worse than things are today. Running untrusted code is
dangerous and there aren't many ways around that.