I disagree with your overall summary, Jake. You're right that there are basically two approaches to running code securely -- making sure the author of the code was honest and competent (what you refer to in this article as "relying on trust"), and making sure that the code can't do things that you don't want it to do. I think in the long run the latter is a much more promising strategy.
Put it this way: making sure there are no security holes in your interpreter/sandbox/operating system is really hard, to be sure, but it is much easier than making sure that nobody who ever touched the code was malicious or error-prone.
In unix, for example, we rely on the operating system to run some code (a process owned by a user) without letting that code do absolutely anything it wants (permissions, etc.). There are plenty of ways the operating system can fail at this, accidentally allowing the code to escape those constraints and acquire additional authority. We have a long history of those sorts of holes, and we may never get it perfectly right. But on the other hand it is awfully useful, for some things, to be able to run code under a separate user account without thereby letting it gain access to every user's authorities. Much *more* useful, for those cases, than the alternative of having a human inspect the code before allowing it to run, or of trying to ensure that all patches to the code come from trusted authors.
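To make that concrete, here is a minimal Python sketch of the classic pattern: fork, drop to a dedicated unprivileged account, and exec the untrusted program, so that the kernel's permission checks -- not our trust in the program's author -- bound what it can do. (The "sandbox" account name is a hypothetical example, and the parent must be running as root for setuid() to succeed.)

    import os
    import pwd

    def run_as_unprivileged_user(path, args, username="sandbox"):
        """Run a program under a separate, unprivileged user account."""
        entry = pwd.getpwnam(username)
        pid = os.fork()
        if pid == 0:
            # Child: shed group privileges first, then user privileges.
            # Order matters: once we setuid() away from root we can no
            # longer change our groups.
            os.setgroups([])
            os.setgid(entry.pw_gid)
            os.setuid(entry.pw_uid)
            os.execv(path, [path] + list(args))
        # Parent: wait for the confined child to finish.
        _, status = os.waitpid(pid, 0)
        return status

Of course this only confines the child as well as the kernel confines any process -- which is exactly the point: the enforcement lives in the operating system, not in a review of the code.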
Another way to think of this is that the population of code authors has grown dramatically in the last 30 years. When the multics and unix security paradigms were developed, there were probably hundreds, or at most thousands, of people who authored code that you might actually want to run.
The "trusted authors" approach might have been useful if there were only a few hundred or a few thousand people who typically generated source code that you wanted to run, and you could be suspicious and cautious if a stranger posted a patch. Today, that approach seems extremely limiting.
(Hierarchical-trusted-authors approaches such as Microsoft code-signing or Debian gpg-keys don't really scale up to modern-day needs either, in my opinion -- they err on both sides, by excluding good code from distribution and by admitting malicious or buggy code into distribution. The bigger the scale, the more of both kinds of error you get. Sure, you can "try harder" to reduce one or both kinds of error, and that helps a little, but the whole approach is just inherently non-scalable.)
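To illustrate what I mean by erring on both sides, the core gatekeeping step that all of these schemes share looks something like this sketch (it uses the real gpg --verify command; the function and file names are hypothetical):

    import subprocess

    def allowed_to_install(package_path, signature_path):
        """Install the code iff a signature from a key already in our
        keyring verifies.  This rejects good code from authors outside
        the keyring, and accepts malicious or buggy code from anyone
        inside it.  Nothing here examines what the code actually does.
        """
        result = subprocess.run(
            ["gpg", "--verify", signature_path, package_path],
            capture_output=True,
        )
        return result.returncode == 0

The decision is entirely about who signed, never about what the code can do, which is why growing the keyring grows both error rates.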
Fortunately, other people have realized the inherent limitations of the relying-on-trust approach and are now actively pursuing the alternative of running-code-safely. Google NaCl is a big, exciting step forward on that axis. (Google Caja is another.)
Frankly, the side-channel issue seemed to me like Matasano grasping at straws. Side channels surely exist, and can be important in cases where you are juggling secrets, but there are plenty of uses where they don't matter, and for those uses Google NaCl seems to be coming along nicely.
For comparison, those same side-channel attacks would also mean that a user account on a multi-user unix system might be able to steal secrets such as passwords or private keys from another user. Cryptographers have been developing defenses against that sort of thing, but if you have extremely valuable secrets then you probably should not allow them to be used on physical hardware shared with processes owned by other people.
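One small, concrete example of such a defense is comparing secrets in constant time, so that the comparison's running time doesn't leak how long a correct prefix the attacker has guessed (a sketch; hmac.compare_digest is in Python's standard library):

    import hmac

    def naive_check(supplied, secret):
        # == can return as soon as an early byte differs, so the time
        # it takes leaks how much of the secret the attacker has right.
        return supplied == secret

    def constant_time_check(supplied, secret):
        # compare_digest takes time independent of where the inputs
        # differ, closing off that particular timing side channel.
        return hmac.compare_digest(supplied, secret)

But defenses like that close one channel at a time, which is why sharing physical hardware remains risky for high-value secrets.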
Oh, I just realized that the same side-channel argument probably applies to virtualization. *Probably* similar attacks can extract your secrets from your Amazon EC2 instance, if that instance finds itself sharing a CPU with an instance owned by an attacker. Or maybe not -- such attacks are inherently noisy and probabilistic, and I don't recall any report of such a thing being exploited in the wild.