LWN: Comments on "Security in the 20-teens" https://lwn.net/Articles/371719/ This is a special feed containing comments posted to the individual LWN article titled "Security in the 20-teens". en-us Wed, 01 Oct 2025 11:42:32 +0000 Wed, 01 Oct 2025 11:42:32 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Countering the trusting trust attack https://lwn.net/Articles/406198/ https://lwn.net/Articles/406198/ paulj Took a while, but I wrote up those views on "Diverse Double-Compiling" and stuck them online <a href="http://pjakma.wordpress.com/2010/09/20/critique-of-diverse-double-compiling/">here</a>. Mon, 20 Sep 2010 14:53:14 +0000 Security in the 20-teens https://lwn.net/Articles/373978/ https://lwn.net/Articles/373978/ anselm <blockquote><em>For a security perspective, the PNG decoder shouldn't have access to network sockets..</em></blockquote> <p> The PNG decoder shouldn't be allowed to <em>open</em> new network sockets. However, a file descriptor open for reading is a file descriptor open for reading. It doesn't matter much whether there is a disk or a web server at the other end. 
</p> Thu, 11 Feb 2010 14:32:57 +0000 Security in the 20-teens https://lwn.net/Articles/373933/ https://lwn.net/Articles/373933/ renox <div class="FormattedComment"> <font class="QuotedText">&gt;You replace the file descriptor of a file being written with that of an open network connection,</font><br> <p> From a security perspective, the PNG decoder shouldn't have access to network sockets.<br> <p> <font class="QuotedText">&gt;And inside a web browser (the most obvious thing to attack) the idea of "non-executable" is laughable.</font><br> <p> Agreed, that's why Chrome's design is really a nice change here, even if it doesn't go far enough: AFAIK Flash isn't properly 'shielded' from the rest of the system.<br> <p> </div> Thu, 11 Feb 2010 09:36:27 +0000 Security in the 20-teens https://lwn.net/Articles/373928/ https://lwn.net/Articles/373928/ renox <div class="FormattedComment"> <font class="QuotedText">&gt;What we need is simpler systems that we can write without bugs.</font><br> <p> Need? For security, perhaps, but history has shown that, as time goes by, we use systems with more and more features, which is hard to reconcile with the need for simpler systems.<br> <p> <p> </div> Thu, 11 Feb 2010 09:22:02 +0000 Countering the trusting trust attack https://lwn.net/Articles/373758/ https://lwn.net/Articles/373758/ hppnq So, how about "yum update"? ;-) Wed, 10 Feb 2010 09:45:06 +0000 Security in the 20-teens https://lwn.net/Articles/373539/ https://lwn.net/Articles/373539/ mrdoghead <div class="FormattedComment"> So what if I can't change the machine code, indeed. With a mere browser-level deception and injection you can have the user change the system for you at the next restart; your new machine can then silently alert you, by whatever protocol you prefer, the next time it comes within range of a radio connection or wire.
And while developers and their tools are a prime target of people who want access to our machines, and every system I know of has purposely committed "flaws", as they're described when exposed, the machine code is and has been where the real action is. Hardware is a cesspool of backdoors and security defeaters, some legally imposed and others not. There's much money and interest riding on machines being indefensible. Do people have the stomach to advocate against governments, corporations, and criminals too? When law enforcement is on the other side, requiring indefensibility? And remember, a piece of working, innocuous code is just a context shift and reparsing away from being quite malicious, no recompiling required.<br> </div> Mon, 08 Feb 2010 22:55:40 +0000 Security in the 20-teens https://lwn.net/Articles/373353/ https://lwn.net/Articles/373353/ njs <div class="FormattedComment"> You misread :-). Certainly git doesn't hash the whole repo, it uses the chained hashing trick (the "pointers" you mentioned). This subthread is about what happens if you don't trust hashes -- you certainly can't use the chained hashing trick.<br> </div> Sun, 07 Feb 2010 03:09:43 +0000 Security in the 20-teens https://lwn.net/Articles/373344/ https://lwn.net/Articles/373344/ vonbrand <p> You are mistaken. E.g., git doesn't hash the whole repo each time I commit something; what is hashed as a commit is just the contents of a file containing pointers (as SHA-1 hashes) to its parents and any file contents referenced. You can also GPG-sign a tag for added security. Sun, 07 Feb 2010 01:26:10 +0000 Countering the trusting trust attack https://lwn.net/Articles/373241/ https://lwn.net/Articles/373241/ nix <div class="FormattedComment"> Of course paulj's attack is possible in theory.
We just need strong AI <br> first.<br> <p> </div> Fri, 05 Feb 2010 23:05:19 +0000 Countering the trusting trust attack https://lwn.net/Articles/373238/ https://lwn.net/Articles/373238/ nix <div class="FormattedComment"> You misunderstand. I'm not saying 'if you prove that this compiler is old <br> then you will invariably detect the Thompson hack' I'm saying 'if it is <br> likely that this compiler is old then your chances of detecting the <br> Thompson hack go way up'.<br> <p> (And the Thompson hack *was* specifically relating to quined attacks on <br> compilers and other code generators. Viruses are a much larger field, with <br> Thompson hacks as a small subset. It is possible they are converging, but <br> I see little sign of it: attacking compilers isn't profitable because <br> they're relatively uncommon on the man in the street's machine.)<br> <p> </div> Fri, 05 Feb 2010 23:03:03 +0000 Countering the trusting trust attack https://lwn.net/Articles/373214/ https://lwn.net/Articles/373214/ Baylink <div class="FormattedComment"> Because I am a believer in the traditions of science, yes, I think it would be an excellent idea if you wrote up formally your problems with his paper...<br> <p> which I *promise* I'm going to read, tonight while I wait for a server upgrade to finish. :-)<br> <p> And certainly any level of the stack can be attacked, and I understand that was his point. 
But one either has to say "there's no practical way for me to validate the microcode of the CPU, and thus there's a practical limit to what I can verify", or one has to -- in fact -- do that validation.<br> <p> If one can.<br> <p> As we note on RISKS regularly, there are two issues at hand here: "pick your own low-hanging fruit", i.e.: make sure you apply extra security balm equally to all layers of your problem (as adjusted by your threat estimates at each layer), and "know your CBA": the amount of security at all levels you apply has to be in keeping with not only your threat estimate, but with what the bad guys can *get*.<br> <p> This is, in particular, the part of the issue that terrorists throw monkey wrenches into: trying to inspire asymmetrical responses to what are, objectively, low-level threats. Your opponent wears himself out on the cape and never sees the sword. Bruce Schneier likes to address this issue.<br> </div> Fri, 05 Feb 2010 21:52:09 +0000 Countering the trusting trust attack https://lwn.net/Articles/373211/ https://lwn.net/Articles/373211/ paulj <div class="FormattedComment"> Thompson implementing his attack as a compiler attack is a detail, primarily <br> because source code was the normal form of software interchange but the <br> basic compiler toolchain obviously still required passing around binaries. In <br> short it was the *only* place he could have implemented an attack by <br> subverting binaries. His paper is explicit that the compiler attack is merely a <br> demonstration of a more fundamental problem of having to place trust in <br> computer systems. Particularly, he mentions microcode as a possible level of <br> attack - clearly a completely different thing from compiler level and an indication <br> that Thompson was making a very general point.<br> <p> To think that Thompson's attack is only about compilers is surely to miss the <br> point of a classic paper.<br> <p> Also, I don't expect clairvoyance.
Indeed, you miss my point about which <br> direction the attacker is going.<br> <p> I think perhaps I should properly write up my criticism...<br> </div> Fri, 05 Feb 2010 21:42:55 +0000 Countering the trusting trust attack https://lwn.net/Articles/373180/ https://lwn.net/Articles/373180/ Baylink <div class="FormattedComment"> <font class="QuotedText">&gt; How does it get better exactly? Old software doesn't come sandwiched, ossified between rock strata that can further attest to its obvious age.</font><br> <p> Sure it does. :-)<br> <p> There are lots of things which make it difficult to run really old software on newer platforms, and the more obstacles you place in the way of a notional IRIX Trusting-attack implementor, the less likely you make an outcome positive to him.<br> <p> <font class="QuotedText">&gt; You're still going to have to determine whether or not the bag of bits you have before you really is the same as that old compiler you want to put your faith in. You'll have to trust your md5sum binary (oops) and you'll have to trust MD5. Oops. And you're still trusting the original compiler author.</font><br> <p> Yes, but what you're trusting him to do *now* is to have written a compiler which could properly identify and mangle a compiler which did not even exist at that time. And compilers are sufficiently different from each other syntactically that I don't think that attack is possible even in theory, though clearly, "I don't think" isn't good enough for our purposes here. :-).<br> <p> <font class="QuotedText">&gt; The "the old author can't have thought of future compilers" argument seems weak. Viruses are much more sophisticated these days - there's no need the attack has to be limited to specific implementations of software.</font><br> <p> Well, I think that depends on which attack we're actually talking about here, and "virus" doesn't really qualify. 
The Trusting attack was a compiler-propagated Trojan Horse, a much more limited category of attack than "viruses these days", and therefore even harder to implement.<br> <p> I'm not sure why failing to expect clairvoyance from an earlier-decade's attack author is a weak approach, either. :-)<br> </div> Fri, 05 Feb 2010 19:44:48 +0000 Countering the trusting trust attack https://lwn.net/Articles/373172/ https://lwn.net/Articles/373172/ paulj <div class="FormattedComment"> How does it get better exactly? Old software doesn't come sandwiched, <br> ossified between rock strata that can further attest to its obvious age.<br> <p> You're still going to have to determine whether or not the bag of bits you have <br> before you really is the same as that old compiler you want to put your faith in. <br> You'll have to trust your md5sum binary (oops) and you'll have to trust MD5. <br> Oops. And you're still trusting the original compiler author.<br> <p> The "the old author can't have thought of future compilers" argument seems <br> weak. Viruses are much more sophisticated these days - there's no need the <br> attack has to be limited to specific implementations of software.<br> <p> I know David's paper frames the problem so that the attack in fact does have <br> that limitation, but that seems an unjustified restriction of Thompson's attack.<br> </div> Fri, 05 Feb 2010 19:33:38 +0000 Security in the 20-teens https://lwn.net/Articles/373168/ https://lwn.net/Articles/373168/ paulj <div class="FormattedComment"> Did SGI publish secure hashes of your IRIX software? <br> <p> If yes, I bet it's using MD5 at best. Hashes seem to have quite limited lifetimes.<br> <p> If no, how can you know the system today is as it was before? If you say "cause <br> it's been sitting in my garage", then how can I repeat your result?
Perhaps you <br> will offer a compiler verification service, but then we're still back to Thompson's <br> point, surely?<br> </div> Fri, 05 Feb 2010 19:19:41 +0000 Security in the 20-teens https://lwn.net/Articles/373096/ https://lwn.net/Articles/373096/ tialaramex <div class="FormattedComment"> Read the Prologue to "A Fire Upon The Deep". Ultimately the difference between acting on some untrusted data and executing untrusted code is only a slight matter of degree.<br> <p> Suppose the buffer that you overflow is next to a variable named 'fd'. You replace the file descriptor of a file being written with that of an open network connection, and suddenly data intended to stay local pours uncontrollably out onto the Internet...<br> <p> The moment program behaviour deviates from what was intended by the programmer / user you have a potential security hole. If you're lucky it amounts to nothing, and you can invent countermeasures to make that more likely, but it's not safe to bet on it, and the more resourceful and determined the attacker, the more certain they'll find a way to make it work.<br> <p> And inside a web browser (the most obvious thing to attack) the idea of "non-executable" is laughable.
So what if I can't change the machine code, I can scribble on the "mere data" like the trusted Javascript, Flash or Java byte code, which will get executed for me by a virtual machine and have the advantage of being portable.<br> </div> Fri, 05 Feb 2010 13:30:35 +0000 Security in the 20-teens https://lwn.net/Articles/373056/ https://lwn.net/Articles/373056/ Ford_Prefect <div class="FormattedComment"> "Some of these problems (yet another PNG buffer overflow, say) appear to have a relatively low priority, but they shouldn't."<br> <p> I wonder why such attacks are still relevant - just about every modern processor now allows you to mark only code pages as executable and read-only (NX bit and the like).<br> </div> Fri, 05 Feb 2010 05:56:34 +0000 Countering the trusting trust attack https://lwn.net/Articles/373034/ https://lwn.net/Articles/373034/ Baylink <div class="FormattedComment"> In particular, this works very well if your check-compiler was shipped *when your target compiler/platform did not even exist yet*.<br> <p> It would be hard to have hot-wired an early-90s IRIX compiler to break GCC4/Linux.<br> </div> Fri, 05 Feb 2010 00:30:03 +0000 Countering the trusting trust attack https://lwn.net/Articles/373026/ https://lwn.net/Articles/373026/ nix <div class="FormattedComment"> David also described in a recent post how you can ensure that your <br> compiler groups weren't maliciously cooperating: make sure your compilers <br> are very different ages. This will only get *better* as the years roll <br> past, especially once Moore's Law grinds to a halt: if one compiler is a <br> hundred years older than the other, unless there's an immortal on the <br> development team there's no *way* they share members. 
(These days of <br> course this gap is impractical because computers are changing too fast.)<br> <p> </div> Thu, 04 Feb 2010 23:54:54 +0000 Countering the trusting trust attack https://lwn.net/Articles/373014/ https://lwn.net/Articles/373014/ bronson <div class="FormattedComment"> <font class="QuotedText">&gt; Your approach still rests in complete trust in one compiler</font><br> <p> No, it doesn't. David described this in an ancestor post. It just rests on the assumption that a single group of attackers can't subvert every single one of your compilers.<br> </div> Thu, 04 Feb 2010 23:11:20 +0000 Security in the 20-teens https://lwn.net/Articles/373006/ https://lwn.net/Articles/373006/ dwheeler <div class="FormattedComment"> <font class="QuotedText">&gt; There's nothing to stop the author of a compiler subverting its binaries such that it *generally* infects all binaries it touches, such that those binaries then infect all other binaries they touch (e.g. by hooking open), and this infection could also introduce system-binary specific attacks as/when it detected it was running as part of those programmes.</font><br> <p> An author can do that, but such an author risks instantaneous detection. The more general the triggers and payloads, the more programs that include corrupted code... and thus the more opportunities for detection.<br> <p> For example, if compiling "hello world" causes a corrupted executable to be emitted, then you can actually detect it via inspection of the generated executable. Even if the system shrouds this, examining the bits at rest would expose this ruse.<br> <p> Besides, as I talk about in the dissertation, the "compiler" you use does NOT need to be simply a compiler as it's usually considered. You can include the OS, run-time, and compiler as part of the compiler under test.
You need the source code for them, but there are systems where this is available :-).<br> <p> I have an old SGI IRIX machine that I hope to someday use as a test on a Linux distro with glibc and gcc. In this case, I have high confidence that the IRIX is as-delivered. I can feed it the source code, and produce a set of executables such as OS kernel, C run-time, and compiler as traditionally understood. If I show that they are bit-for-bit identical, then either (1) the SGI IRIX system executable suite when used as a compiler has attacks that work the same way against the Linux distro written many years later, or (2) the Linux distro is clean.<br> <p> I talk about expanding the scope of the term "compiler" in the dissertation.<br> <p> <font class="QuotedText">&gt; I.e. in this discussion we're assuming DDC means you need to subvert 2</font><br> compilers. However that's not the case, nor is it even supported by the<br> thesis being discussed.<br> <p> Sure it is, and the thesis proves it. However, be aware that I very carefully define the term "compiler". In the dissertation, a compiler is ANY process that produces an executable; it may or may not do other things. For example, a compiler may or may not include the OS kernel, runtime, etc. Anything NOT included in the compiler-under-test is, by definition, not tested. If you want to be sure that (for example) the OS kernel doesn't subvert the compilation process, then you include it as part of the compiler-under-test during the DDC process.<br> <p> <p> </div> Thu, 04 Feb 2010 22:39:23 +0000 Accurate quote https://lwn.net/Articles/372988/ https://lwn.net/Articles/372988/ man_ls He said: "Application developers have historically been intolerant of systems that change their security policy on the fly." It was me who was missing some context; in fact it was some silly grammar mistake on my part. 
I thought "their" referred to "systems", not to "application developers", and didn't see how AppArmor changes its own security policy on the fly. It doesn't; it changes <b>application developers'</b> security policy. And yes, it is annoying when that happens. Thu, 04 Feb 2010 21:11:55 +0000 Security in the 20-teens https://lwn.net/Articles/372981/ https://lwn.net/Articles/372981/ dlang <div class="FormattedComment"> way back up the thread the statement was made that using SELinux for many processes on one machine was as secure as having the processes on separate machines separated by firewalls.<br> <p> This is an example of a capability to filter communication between apps on different machines that you do not get with SELinux securing things on one machine.<br> <p> as for what this would be useful for:<br> <p> if you have apps that expect text files, and you throw arbitrary binary data at them, you may find a flaw in them and be able to do things as the owner of that process. If you make sure that such bad data cannot get to the app, you eliminate an entire class of exploits.<br> </div> Thu, 04 Feb 2010 20:39:11 +0000 Quote candidate https://lwn.net/Articles/372971/ https://lwn.net/Articles/372971/ eparis123 <p>Yes, this was the one I meant.
The relation I find is that an application developer (me, innocently working on a MySQL program) got bitten heavily in the worst of times.</p> <p>Maybe I did not understand the quote context very well either.</p> Thu, 04 Feb 2010 19:12:54 +0000 Security in the 20-teens https://lwn.net/Articles/372829/ https://lwn.net/Articles/372829/ dgm <div class="FormattedComment"> Exactly what purpose would this be useful for?<br> <p> </div> Thu, 04 Feb 2010 10:12:06 +0000 Security in the 20-teens https://lwn.net/Articles/372808/ https://lwn.net/Articles/372808/ eric.rannaud <div class="FormattedComment"> <font class="QuotedText">&gt; Bernstein's proposed solution is to minimize the amount of "trusted code"</font><br> <font class="QuotedText">&gt; by putting most of the program in some kind of sandbox. Using seccomp or</font><br> <font class="QuotedText">&gt; running software in a virtual machine are two ways to sandbox code. He</font><br> <font class="QuotedText">&gt; also wants to minimize the overall amount of code, to make it more</font><br> <font class="QuotedText">&gt; auditable.</font><br> <p> I would like to remind everyone that this is exactly how Google Chrome <br> behaves (or Chromium, the open source version that runs on Linux), using <br> seccomp.<br> <p> All the HTML parsing, Javascript interpretation, image rendering, page <br> rendering happens in a very tight sandbox. A vulnerability in a PNG library <br> will not result in a breach of the system. Firefox does nothing of the <br> sort, quite sadly.<br> <p> Chrome is the web browser the OpenBSD project would have designed. It <br> relies on privilege separation everywhere (and a sandbox on top of that, to <br> limit the impact of OS-level security flaws, like a buggy syscall). Its <br> design is similar to OpenSSH.<br> <p> This is the right model. A PDF viewer should be designed that way, as well <br> as an email client.
In this context, so-called webapps become counter-<br> intuitively *more* secure than local apps that run with $USER privileges. <br> And remember that with HTML5 localStorage, so-called webapps don't actually <br> have to store your data with a remote server. Webapps are not usually <br> designed that way, but they could be. And there is of course NaCl, a Google <br> browser plugin that can run native applications in a sandbox.<br> <p> It is certainly quite ironic that Google was apparently attacked through <br> either an IE flaw or an Acrobat Reader flaw. By design, Google Chrome is <br> more secure against the first class of attacks, and there has been talk of <br> adding a sandboxed native PDF renderer to Chrome, but that hasn't been done <br> yet... <br> <p> See <a href="http://dev.chromium.org/chromium-os/chromiumos-design-docs/security-overview">http://dev.chromium.org/chromium-os/chromiumos-design-docs/security-overview</a> and LWN's <a href="http://lwn.net/Articles/347547/">http://lwn.net/Articles/347547/</a><br> <p> NB: Google Chrome is now available on Linux. For yum users, follow the <br> instructions at <a href="http://www.google.com/linuxrepositories/yum.html">http://www.google.com/linuxrepositories/yum.html</a> and:<br> yum install google-chrome-unstable<br> </div> Thu, 04 Feb 2010 08:55:13 +0000 Countering the trusting trust attack https://lwn.net/Articles/372788/ https://lwn.net/Articles/372788/ hppnq <em><blockquote>For example, a malicious compiler cM may have triggers that affect compilations of its source code, but not for another compiler cQ. So you can use cM to compile the source code of cQ, even though cM is malicious, and have a clean result.</blockquote></em> <p> Exactly. But any of the N program-handling components of the build system may be subverted (and not necessarily the same one at each compilation, I suppose), so in order to make a reasonable assumption you have to make sure that none of the N components harbours a payload or trigger.
<p> So you have to verify the linker, loader, assembler, kernel, firmware -- i.e., you have to be on <em>completely</em> independent platforms, for both the compilation and verification. I can't see how you can reasonably assure that this is indeed the case, unless you make the assumption that enough components can be trusted. <p> Which you can't, unless you literally assemble everything yourself. ;-) <p> Obviously, <em>practically</em> there is a lot you can do to minimize the chance that someone unleashes the Thompson attack on you. But you can't reduce this chance to zero, so the question is the same as always: is an attacker motivated enough to break through your defense? I am quite sure there are compilers that are <em>not</em> public, to make this particular barrier more difficult. But those are not used to build global financial or even governmental infrastructures. <p> Anyway, I'll shut up now and read the dissertation; it is an interesting topic. Thanks David, and belated congratulations! :-) Thu, 04 Feb 2010 07:51:44 +0000 Security in the 20-teens https://lwn.net/Articles/372775/ https://lwn.net/Articles/372775/ happyaron <div class="FormattedComment"> The problem here is in the development procedure of most open source software, but I don't think it should be pointed only at governmental attacks. Since governments are not the only ones who can fund an attack well, we shouldn't suggest that such threats come only from governments.<br> <p> It is clear that we cannot assure that every piece of code is clean and free of deliberately injected harmful code; this shortcoming is due to our current development procedure, which gives everybody the freedom to contribute to their favorite projects.
Talking about hash attacks on DVCSes may be beside the point here: as a previous comment noted, generating code that still works while keeping the same hash isn't easy, and don't forget there are still code reviews, which make generating workable code with injected security holes even harder than generating something meaningless that merely collides. Nobody can really claim that any currently widely used hash algorithm (e.g. MD5, SHA-1) is so weak that it can easily be cracked this way. As for GPG, it also depends on a hash algorithm, so discussing GPG separately from hashes may be meaningless.<br> <p> I also don't quite agree with the opinion that Google's statement about a problem in China is an alarm that national governmental attacks are just getting started. Google hasn't claimed that it is suffering attacks from the local government; all of that is our own guess. But don't we agree that countries with such power, or even ones more powerful in this field, may already be cracking their citizens' data and monitoring their information? A problem is always a problem until it is fixed or proved not to be one, but making a fuss about trifles is not needed at all.<br> </div> Thu, 04 Feb 2010 06:06:10 +0000 Countering the trusting trust attack https://lwn.net/Articles/372732/ https://lwn.net/Articles/372732/ paulj <div class="FormattedComment"> Thanks for your reply. Again, I stress that I appreciate the practical benefits <br> of your approach.<br> <p> I saw the caveat in the thesis about the trusted compiler-compiler only <br> needing to be trusted to compile the 1st full compiler. However, I am at a <br> loss to see how this trusted compiler (i.e.
you inspected all possible relevant <br> source, or you wrote it) is different from Thompson's trusted compiler ("write <br> it yourself", see quote above).<br> <p> Your approach still rests in complete trust in one compiler, according to your <br> own proofs.<br> <p> See my other comment about how viruses have advanced from Thompson's <br> original attack, meaning that a subverted original compiler-compiler could <br> surely infect all other binaries ever touched by that code through, say, ELF <br> infections and hooking library calls.<br> <p> Anyway, I'll leave it there.<br> </div> Wed, 03 Feb 2010 23:40:55 +0000 Countering the trusting trust attack https://lwn.net/Articles/372734/ https://lwn.net/Articles/372734/ dwheeler <div class="FormattedComment"> <font class="QuotedText">&gt; The suggestion was -- and I think it is the only correct one -- that the compiler used to compile the compiler-compiler does not need to be compiled itself. If it does need to be compiled, the question remains: what compiler will you use to do that?</font><br> <p> As I discuss in the dissertation, malicious compilers must have triggers and payloads to produce subverted results. If you avoid their triggers and payloads, then it won't matter if they're malicious. For example, a malicious compiler cM may have triggers that affect compilations of its source code, but not for another compiler cQ. So you can use cM to compile the source code of cQ, even though cM is malicious, and have a clean result.<br> <p> (It's a little more complicated than that; see the dissertation for the gory details.)<br> <p> </div> Wed, 03 Feb 2010 23:36:05 +0000 Countering the trusting trust attack https://lwn.net/Articles/372728/ https://lwn.net/Articles/372728/ dwheeler <div class="FormattedComment"> Ummm... let me just say "read the paper, please" :-). 
I'm fully aware that compiling the same source with different compilers will (normally) produce different executables.<br> <p> <font class="QuotedText">&gt; Or are you suggesting that A-G and B-G then be used to again compile Gcc, and *those* binaries be compared? That would tell you that either A and B were not subverted, or were subverted in exactly the same way...</font><br> <p> That's the basic idea, sort of. Given certain preconditions, you can even recreate the original executable with a different starting compiler.<br> <p> </div> Wed, 03 Feb 2010 23:29:40 +0000 Countering the trusting trust attack https://lwn.net/Articles/372727/ https://lwn.net/Articles/372727/ dwheeler <div class="FormattedComment"> <font class="QuotedText">&gt; The work is nice, no doubt, but it still requires 1 absolutely trusted compiler, which would have to be written (or verified/assumed)...</font><br> <p> It does not have to be absolutely trusted, in the sense of being perfect on all possible inputs. It can be subverted, and/or have bugs, as long as it will compile the compiler-under-test without triggering a subversion or bug.<br> <p> <font class="QuotedText">&gt; Do you think the "Fully" in the title of your thesis is perhaps unfortunate though? Your work seems to re-enforce Thompson's result rather than fully counter it, surely?</font><br> <p> No, it's not unfortunate. It's intentional.<br> <p> Thompson's "trusting trust" attack is dead. Thompson correctly points out a problem with compilers and other lower-level components, but his attack presumes that you can't easily use some other system that acts as a *check* on the first. It's not just that you can recompile something with a different compiler; people noted that in the 1980s.<br> <p> A key is that DDC lets you *accumulate* evidence. 
If you want, you can use DDC 10 times, with 10 different trusted compilers; an attacker would have to subvert ALL TEN trusted compilers *AND* the original compiler-under-test executable to avoid detection. Fat chance.<br> <p> <p> </div> Wed, 03 Feb 2010 23:24:23 +0000 Quote candidate https://lwn.net/Articles/372674/ https://lwn.net/Articles/372674/ man_ls Maybe <a href="http://lwn.net/Articles/368271/">this one</a>? I don't see how it relates to AppArmor though. Wed, 03 Feb 2010 21:09:20 +0000 Countering the trusting trust attack https://lwn.net/Articles/372586/ https://lwn.net/Articles/372586/ Baylink <div class="FormattedComment"> I will admit up front to not having yet checked out your site; I'm at work just now. But if your test is "both compilers produce the same object code", then even both compilers *not* being subverted will not guarantee that.<br> <p> If I use compilers A and B to build G(cc), the A-G and B-G objects will not necessarily be byte-identical, and it doesn't *matter* what object they each in turn produce, because that would have to be an exhaustive search, which is impossible.<br> <p> Or are you suggesting that A-G and B-G then be used to again compile GCC, and *those* binaries be compared? That would tell you that either A and B were not subverted, or were subverted in exactly the same way...<br> <p> but how are you authenticating your GCC sources?<br> <p> (If the answer is "read the damn paper, idiot", BTW, just say that.
:-)<br> </div> Wed, 03 Feb 2010 17:50:59 +0000 Security in the 20-teens https://lwn.net/Articles/372555/ https://lwn.net/Articles/372555/ dlang <div class="FormattedComment"> yes, any checking the firewall does opens the firewall up to the possibility of errors (this includes the checking done in stateful packet filters)<br> <p> However, for all relatively sane protocols, there is checking that can be done that doesn't require as much code (and therefore doesn't have the risk) of the application code that will be processing the request. Properly done, the code for the firewall is relatively static and can be well tested. It doesn't need to change every time you add a new function to the application (or change its behavior), it only needs to be able to be configured to do different checking.<br> <p> Usually this can be things like (in order of increasing complexity)<br> <p> checking that the message is well formed by the definition of the protocol<br> <p> checking that the message follows the protocol syntax<br> <p> checking that specific fields in the message are in a whitelist<br> <p> <p> Yes, Wireshark has a horrible track record in security, but this sort of checking is happening in many firewalls (under names like 'deep packet inspection') for some protocols. There are also separate 'Application Firewall' products you can get for some protocols. The better IDS/IPS systems do this sort of thing (as opposed to merely blacklisting known exploits)<br> </div> Wed, 03 Feb 2010 16:41:28 +0000 Security in the 20-teens https://lwn.net/Articles/372513/ https://lwn.net/Articles/372513/ foom <div class="FormattedComment"> But, once the firewall is parsing application traffic, who's to say it doesn't have security holes <br> just like the application does? 
(Wireshark certainly has its fair share of remote exploits, for <br> instance).<br> </div> Wed, 03 Feb 2010 13:37:12 +0000 Countering the trusting trust attack https://lwn.net/Articles/372501/ https://lwn.net/Articles/372501/ hppnq <em><blockquote>how can they trust YOUR compiler? </blockquote></em> <p> They can't; that's the principle of the Thompson attack. <p> <em><blockquote>One answer is to use your C-in-Forth compiler to compile the original compiler source code (say GCC), then use THAT compiler executable to compile the original compiler source code again.</blockquote></em> <p> The suggestion was -- and I think it is the only correct one -- that the compiler used to compile the compiler-compiler does not need to be compiled itself. If it <em>does</em> need to be compiled, the question remains: what compiler will you use to do that? <p> <em><blockquote>the resulting executable should be exactly the same as your original executable. Once you've shown that they are equal, then that means either both were subverted in the same way, OR that the original executable isn't subverted.</blockquote></em> <p> But can you tell which conclusion is the right one without having to assume that the original executable was not subverted in the first place? It seems to me that a meaningful conclusion can be drawn only when the two executables are <em>not</em> the same, so you can positively identify a subverted compiler. <p> Wed, 03 Feb 2010 13:25:12 +0000 Security in the 20-teens https://lwn.net/Articles/372503/ https://lwn.net/Articles/372503/ paulj <div class="FormattedComment"> Another thing to consider:<br> <p> There's nothing to stop the author of a compiler subverting its binaries such <br> that it *generally* infects all binaries it touches, such that those binaries then <br> infect all other binaries they touch (e.g. 
by hooking open), and this infection <br> could also introduce system-binary-specific attacks as/when it detected it <br> was running as part of those programmes.<br> <p> Thinking in terms of a compiler specifically looking for login is ignoring the huge <br> advances made in virus design since Thompson wrote his.<br> <p> I.e. in this discussion we're assuming DDC means you need to subvert 2 <br> compilers. However, that's not the case, nor is it even supported by the <br> thesis being discussed.<br> <p> Anyway.<br> </div> Wed, 03 Feb 2010 12:33:01 +0000 Countering the trusting trust attack https://lwn.net/Articles/372471/ https://lwn.net/Articles/372471/ paulj <div class="FormattedComment"> Hi,<br> <p> I've replied to Nix. The work is nice, no doubt, but it still requires 1 absolutely <br> trusted compiler, which would have to be written (or verified/assumed), as I <br> think you note. No doubt the work could be extended such that Ct is a set of <br> compilers.<br> <p> Do you think the "Fully" in the title of your thesis is perhaps unfortunate <br> though? Your work seems to reinforce Thompson's result rather than fully <br> counter it, surely?<br> </div> Wed, 03 Feb 2010 08:53:21 +0000 Security in the 20-teens https://lwn.net/Articles/372467/ https://lwn.net/Articles/372467/ paulj <div class="FormattedComment"> You've restated what I wrote, hence I don't disagree with you.<br> <p> Note that we don't know how hard this bar would be. There are things like <br> 'clumping' of expertise, such that in any specialised area in a technical <br> field the people working in it tend to be drawn from a much smaller group than <br> the set of all people qualified in the field. I.e. the set of people who *write* <br> compiler A are less independent from those who author compiler B. Hence <br> your assumption that the attacker would have to *hack* into the other <br> compiler is unsafe. 
Rather, they could simply transition from working on A to B, <br> either as part of their normal career progression or at least seemingly so.<br> <p> Next, as dwheeler also notes in his paper, it may be hard to obtain another <br> unsubverted compiler. Indeed, looking carefully at his work it seems his proofs <br> specifically require 1 compiler-compiler that can be absolutely trusted to <br> compile the general compiler correctly, as the starting point of the process. (I <br> thought at first that perhaps a set was sufficient, such that you didn't have <br> to know which compiler was trustable, as long as you could be confident at <br> least one compiler was). See the first sentence of 8.3 in his thesis, and the <br> multiple discussions of the role of a trusted compiler in the DDC process.<br> <p> So this still seems to boil down to "you have to write (or verify all the source) <br> of your compiler in order to really be able to trust it".<br> <p> I'm not pooh-poohing the work per se, just saying this good work is slightly <br> marred by the overly grand claim made in its title.<br> </div> Wed, 03 Feb 2010 08:46:03 +0000
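The diverse double-compiling comparison this thread keeps circling can be sketched in shell. This is only a toy of the comparison logic, under loudly stated assumptions: the `compile` function below is a hypothetical stand-in (a deterministic hash of its input), not a real compiler, so the stage-2 rebuild collapses to the same transform; a real DDC run would build the compiler-under-test's source with a trusted compiler, rebuild the same source with that stage-1 binary, and only then compare the stage-2 result byte for byte against the original executable.

```shell
workdir=$(mktemp -d)

# Stand-in "source code" of the compiler under test.
printf 'compiler source v1\n' > "$workdir/gcc.src"

# Hypothetical stand-in for "compile": a deterministic pure function of the
# input bytes, so its output depends on nothing but the source file.
compile() { sha256sum "$1" | cut -d' ' -f1; }

# DDC comparison: build the source with a trusted compiler (stage 1), rebuild
# the source with the stage-1 output (stage 2), then compare stage 2 against
# the original executable byte for byte.
ddc_check() {  # ddc_check <original-binary> <source>
    compile "$2" > "$workdir/stage1"
    compile "$2" > "$workdir/stage2"   # a real run would invoke stage1 here
    if cmp -s "$1" "$workdir/stage2"; then
        echo "MATCH"      # original corresponds to source, or every compiler was subverted identically
    else
        echo "MISMATCH"   # original executable does not correspond to its source
    fi
}

compile "$workdir/gcc.src" > "$workdir/orig.bin"   # honest original binary
ddc_check "$workdir/orig.bin" "$workdir/gcc.src"

echo tampered >> "$workdir/orig.bin"               # simulate a Thompson-style subverted binary
ddc_check "$workdir/orig.bin" "$workdir/gcc.src"

rm -rf "$workdir"
```

As dwheeler argues in the comments above, a MATCH only shows that the original binary corresponds to its source *or* that every compiler involved was subverted identically; repeating the check with several independent trusted compilers is what accumulates evidence against the latter.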