
Google's Native Client

Posted Jun 4, 2009 2:33 UTC (Thu) by zooko (guest, #2589)
Parent article: Google's Native Client

I disagree with your overall summary, Jake. You're right that there are basically two approaches to running code securely -- making sure the author of the code was honest and competent (what you refer to in this article as "relying on trust"), and making sure that the code can't do things that you don't want it to do. I think in the long run the latter is a much more promising strategy.

Put it this way: making sure there are no security holes in your interpreter/sandbox/operating system is really hard, to be sure, but it is much easier than making sure that nobody who ever touched the code was malicious or error-prone.

In Unix, for example, we rely on the operating system to run some code (a process owned by a user) without letting that code do absolutely anything it wants (permissions, etc.). There are plenty of ways the operating system can fail at this, accidentally allowing the code to escape those constraints and take on added authority. We have a long history of that sort of hole, and we may never get it perfectly right. But on the other hand, it is awfully useful for some purposes to be able to run code under a separate user account without thereby letting it gain access to every user's authority. Much *more* useful, for those cases, than the alternative of having a human inspect the code before allowing it to run, or of trying to ensure that all patches to the code come from trusted authors.
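
To make that concrete, here is a minimal sketch of running a program under a separate, low-privilege account. The "nobody" account and the /usr/bin/id command are just placeholders, and it assumes the parent process already runs as root:

    #!/usr/bin/env python3
    # Minimal sketch: run an untrusted program under a separate, unprivileged
    # user account so that a compromise is confined to that account's authority.
    # Assumes the parent process runs as root and that the placeholder account
    # "nobody" exists on the system.
    import os
    import pwd
    import subprocess

    def run_as_unprivileged(cmd, username="nobody"):
        entry = pwd.getpwnam(username)

        def drop_privileges():
            # Order matters: clear supplementary groups, then set the gid,
            # then the uid, so root privileges cannot be regained afterwards.
            os.setgroups([])
            os.setgid(entry.pw_gid)
            os.setuid(entry.pw_uid)

        # preexec_fn runs in the child after fork, just before exec.
        return subprocess.run(cmd, preexec_fn=drop_privileges)

    if __name__ == "__main__":
        run_as_unprivileged(["/usr/bin/id"])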

Another way to think of this is that the scale of authorship of code has changed dramatically in the last 30 years. When the Multics and Unix security paradigms were developed, there were probably hundreds or at most thousands of people who typically authored code that you might actually want to use.

Today there are probably millions, tens of millions, or perhaps even hundreds of millions of people who write code that you (or someone) might find useful. If we include JavaScript on web pages, macros in spreadsheets, and so on, there may soon be a billion people (if everything goes well) who occasionally write some code that someone else may occasionally find to be useful.

The "trusted authors" approach might have been useful if there were only a few hundred or a few thousand people who typically generated source code that you wanted to run, and you could be suspicious and cautious if a stranger posted a patch. Today, that approach seems extremely limiting.

(Hierarchical-trusted-authors approaches such as Microsoft code-signing or Debian gpg-keys don't really scale up to modern-day needs either, in my opinion -- they err on both sides, by excluding good code from distribution and by allowing malicious or buggy code into distribution. The bigger the scale, the larger both kinds of errors will be. Sure, you can "try harder" to reduce one or both kinds of error, and this can help a little, but the whole approach is just inherently non-scalable.)

Fortunately, other people have realized the inherent limitations of the relying-on-trust approach and are now actively pursuing the alternative of running-code-safely. Google NaCl is a big, exciting step forward on that axis. (Google Caja is another.)

Frankly, the side-channel issue seemed like Matasano grasping at straws, to me. Side-channels surely exist, and can be important in cases where you are juggling secrets, but there are plenty of uses where they don't matter, and for those uses Google NaCl seems to be coming along nicely.

For comparison, those same side-channel attacks would also mean that a user account on a multi-user unix system might be able to steal secrets such as passwords or private keys from another user. Cryptographers have been developing some defenses against that sort of thing, but if you have extremely valuable secrets then you should indeed probably not allow those secrets to be used on physical hardware which is shared by processes owned by other people.
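
The best-known of those defenses is making secret-dependent comparisons take the same amount of time no matter what the input is. A minimal sketch of the idea (it addresses timing leaks in your own code, not cache-contention attacks from a co-resident process):

    import hmac

    def naive_check(secret: bytes, guess: bytes) -> bool:
        # Returns as soon as a byte differs, so the running time reveals
        # how long the matching prefix is -- a classic timing side channel.
        if len(secret) != len(guess):
            return False
        for a, b in zip(secret, guess):
            if a != b:
                return False
        return True

    def constant_time_check(secret: bytes, guess: bytes) -> bool:
        # hmac.compare_digest examines every byte regardless of mismatches,
        # so the timing does not depend on where the inputs differ.
        return hmac.compare_digest(secret, guess)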

Oh, I just realized that the same side-channel argument probably applies to virtualization. *Probably* similar attacks can extract your secrets from your Amazon EC2 instance, if that instance finds itself sharing a CPU with an instance owned by an attacker. Or maybe not -- such attacks are inherently a noisy, probabilistic situation and I don't recall any report of such a thing being exploited in the wild.

Anyway, the fact that NaCl is susceptible to side-channel attacks is rather unremarkable -- so are Linux, Amazon EC2, the JVM, the JavaScript implementation in your web browser, and probably every actually deployed access control system.



Google's Native Client

Posted Jun 4, 2009 3:11 UTC (Thu) by jake (editor, #205)

> I disagree with your overall summary, Jake.

Hmm, interesting. I don't find much that I disagree with in your message, so either I didn't communicate well (likely) or your disagreement is not in an area that I considered to be central to the article.

I think it is a promising strategy to try to confine programs to doing "what we want", but that is a horribly difficult and error-prone process.

I guess you are more optimistic than I about removing the parser/loader/system call gate bugs in any kind of near-term timeframe. The side-channel attacks exist, and could be problematic, but that is just a demonstration of an inherent, architectural weakness of the scheme. The real problems are likely to come from all of the rest of it.

Bottom line, for me, is that I think I am about as likely to run NaCl binaries from untrusted sources anytime soon as I am to run ActiveX controls. Maybe I am behind the times, though.

jake

Google's Native Client

Posted Jun 4, 2009 4:02 UTC (Thu) by elanthis (guest, #6227)

I see no reason not to have a hybrid approach. I like signed binaries not because it tells me that everyone who touched it was a Good Guy, but because it lets me know who (or which organization, at least) is vouching for the code. I trust the Linux kernel sources to be the foundation of my security-sensitive systems. That doesn't mean I trust every person who's touched the code, but it does mean that I trust the people reviewing and signing off on the code. And I want to know that my system is running an unmodified version of official Linux (or an unmodified distribution kernel, at least) and not some random hacker-supplied version.

That's where code signing comes in. It tells me that yes, this is the version of FooApp that I meant to download and run, and that yes, it really is from FooSoft.
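
A minimal sketch of that check, assuming FooSoft publishes an Ed25519 public key and ships a detached signature next to the binary, and that the third-party "cryptography" Python package is available (the file names here are hypothetical):

    # Verify that a downloaded binary really carries FooSoft's signature.
    # FooSoft's raw 32-byte Ed25519 public key must be obtained out of band
    # (e.g. shipped with the operating system or pinned on first install).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_download(binary_path, signature_path, public_key_bytes):
        public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
        with open(binary_path, "rb") as f:
            binary = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, binary)  # raises on a bad signature
            return True
        except InvalidSignature:
            return False

    # e.g. verify_download("FooApp.bin", "FooApp.bin.sig", foosoft_key_bytes)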

Code verification adds an extra step to help protect me in cases where I need to run code from some specific (but not yet fully trusted) software provider. I've never run anything from FooSoft before, but I find myself needing to run FooApp because it is the only software that does what I need. I'm not going to do a full source analysis (be honest, you almost certainly haven't even looked at the source for 99% of the software you run) because I have better things to do with my life. So I rely on code verification (like Google's) along with the positive reputation of FooSoft (and the fact that I know the FooApp I'm about to run really is the real deal from FooSoft) to keep me safe.

I can't guarantee I'm 100% safe. You can never do that. I might even have a box with no Internet access, but my assistant/wife/janitor/whoever might decide to abuse his or her physical access.

ALL security -- be it computer security, a deadbolt, or whatever -- is about risk management. You can never completely remove security holes, but you can reduce the ease of finding them, increase the effort needed to exploit them, and decrease the potential damage of abusing them.

Sandboxing, code verification, and code signing are all tools to help manage risk. No one method is foolproof, and no one method is better than using all of them together.

Google's Native Client

Posted Jun 4, 2009 8:29 UTC (Thu) by tzafrir (subscriber, #11501)

Yeah. But having your computer automatically trust anybody that Microsoft (or Google, or whatever) trusts well enough to grant a certificate is a few too many levels of indirection of trust.

