Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)
Posted Jan 28, 2015 2:19 UTC (Wed) by spender (guest, #23067)
Parent article: Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)
It's rather telling, actually, that in all that time Qualys only managed to pull off a data-only attack against Exim in a non-default configuration, using a previously published technique (http://www.rapid7.com/db/modules/exploit/unix/smtp/exim4_...). Exim didn't learn its lesson from last time and continues to keep data about what commands to execute at runtime in a persistent writable buffer, ripe for abuse. There are many conditions on exploiting the vulnerability: how it can be triggered, how large the overflow is, and what the overflowed bytes are permitted to contain. In a follow-up to the advisory, Qualys lists software they tried but (unfortunately for their PR purposes) failed to exploit: http://www.openwall.com/lists/oss-security/2015/01/27/18 .
The vulnerability was also fixed a year and a half ago but didn't make its way to the majority of distros, probably due to "Linus-style" disclosure: https://sourceware.org/ml/libc-alpha/2013-01/msg00809.html . Though some did (eventually) identify it correctly as a vulnerability fix: https://chromium.googlesource.com/chromiumos/overlays/chr...
But readers here and elsewhere will see "highly critical", overreact just as Qualys' PR team expects, and learn nothing from the entire event. Carry on!
-Brad
Posted Jan 28, 2015 3:17 UTC (Wed) by mathstuf (subscriber, #69389)
Was the fact that it was a security fix intentionally suppressed and left uncommented on (though "fix size check" is certainly a trigger for such things to me)? Also, why did Google not inform upstream in April that the patch needed backporting (they didn't ask for a CVE either)?
Posted Jan 29, 2015 6:34 UTC (Thu) by siddhesh (guest, #64914)
Posted Jan 30, 2015 4:07 UTC (Fri) by Comet (subscriber, #11646)
It has a buffer; that's pretty essential to reading data over a network, since the octets have to go somewhere in memory. The buffer takes the SMTP commands and is scanned for SMTP commands; this is all inherent to a text-based network protocol. Exim has done better than some others, by doing things like explicitly switching to a new buffer when there's a new security context, such as after establishing TLS, which has avoided some vulnerabilities.
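To illustrate what I mean by switching buffers at a security-context change, here's a rough sketch (not Exim's actual code; the names are made up):

    /* Sketch: when the security context changes (e.g. STARTTLS has just
     * completed), discard the old command buffer so nothing received in
     * plaintext survives into the TLS session. */
    #include <stdlib.h>
    #include <string.h>

    #define SMTP_BUFSIZE 8192

    static char *smtp_cmd_buf;   /* hypothetical current command buffer */

    static void smtp_fresh_input_buffer(void)
    {
        char *fresh = calloc(1, SMTP_BUFSIZE);
        if (fresh == NULL)
            abort();             /* don't limp along reusing the old buffer */
        if (smtp_cmd_buf != NULL) {
            memset(smtp_cmd_buf, 0, SMTP_BUFSIZE);  /* scrub the plaintext-era data */
            free(smtp_cmd_buf);
        }
        smtp_cmd_buf = fresh;
    }

    /* called once the TLS handshake has succeeded */
    void on_tls_established(void)
    {
        smtp_fresh_input_buffer();
    }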
I'd appreciate guidance on what you think should be done differently with Exim's buffer management here, rather than a vague handwavy rant of "it uses a writeable buffer!?!!".
Thanks.
Posted Jan 30, 2015 4:41 UTC (Fri) by spender (guest, #23067)
We all pretty well understand by now that having writable code is a bad idea. But it's not just machine code that's the problem -- a recent example was the Linux kernel's BPF interpreter buffers. By corrupting these buffers (an unprivileged user could easily spray kernel memory with them and trigger adjacent-object overwrites to corrupt them), one could achieve easy arbitrary read and write of kernel memory. The enhanced BPF made it even worse, by allowing arbitrary function execution through such corruptions. The way to implement such interpreter buffers securely, particularly when they don't need to be written to at runtime, is to map them read-only. After I complained about it, this is how it was fixed (though I'm still not fully happy with the implementation).
Another example, Heartbleed: sensitive information was located in the normal heap, subjecting it to any potential linear infoleak of an adjacent object. Moving that to its own mapping was a suggested security improvement.
Likewise in the case of Exim, what's been repeatedly targeted in exploits is data parsed from its config file that doesn't need to be modified at runtime. Among this configuration information (as I mentioned in my post but you chose to ignore for whatever reason) is parsed data describing what commands to run on the system at runtime. If you bothered to look at the rapid7 link I posted, you'd see a specific abuse of Exim's ACLs. Once this data is parsed, it shouldn't need to be modified at runtime, and since modifying it gives an attacker arbitrary command execution, it should be made read-only just as machine code is. At the very least it should not sit in the general heap, ambiently writable. If it does need to be modified infrequently at runtime, that can be handled via temporary mprotect() calls, which would still be much better than the current state of affairs.
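Roughly the pattern I mean, as a sketch only (hypothetical names, not Exim code): parse the configuration into its own mapping, then drop write permission so a heap overflow elsewhere can't rewrite it.

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void  *config_region;
    static size_t config_region_len;

    /* allocate a dedicated, page-aligned region for parsed config data */
    void *config_region_alloc(size_t len)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        config_region_len = (len + page - 1) & ~(page - 1);
        config_region = mmap(NULL, config_region_len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (config_region == MAP_FAILED)
            abort();
        return config_region;
    }

    /* once the ACLs/command strings have been parsed in, seal the region */
    void config_region_seal(void)
    {
        if (mprotect(config_region, config_region_len, PROT_READ) != 0)
            abort();
    }

    /* rare runtime updates: unprotect, modify, re-protect */
    void config_region_update(void (*modify)(void *))
    {
        mprotect(config_region, config_region_len, PROT_READ | PROT_WRITE);
        modify(config_region);
        mprotect(config_region, config_region_len, PROT_READ);
    }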
Next time try actually reading the post you're replying to, as you could have learned all this information on your own. This is all really basic security hardening stuff.
Thanks,
Posted Jan 30, 2015 7:39 UTC (Fri) by kleptog (subscriber, #1183)
Ok, learning moment here. As far as I know there is no API which can make a block of memory permanently read-only. You can mprotect() it but the attacker could just make it writeable again with another mprotect() call.
Unless you're saying that in this particular case, because no code execution was possible, the attacker had no opportunity to call mprotect() and so would have had to look harder for a way to abuse this problem to run code.
Is that right?
To get any complicated data structure in one place so you can make it read-only would amount to using a different allocator. Writing your own custom memory allocator seems like a bad idea as it would likely have its own bugs.
I see that glibc has something called obstacks, which might be appropriate, but it's not clear from the documentation, since it isn't written from a security point of view.
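To make my question concrete, I imagine "a different allocator" would look something like this minimal arena (a hypothetical sketch, not hardened):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    struct arena {
        uint8_t *base;
        size_t   size;
        size_t   used;
    };

    /* back the arena with one anonymous mapping so the whole thing can
     * later be made read-only with a single mprotect() call */
    int arena_init(struct arena *a, size_t size)
    {
        a->base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a->base == MAP_FAILED)
            return -1;
        a->size = size;
        a->used = 0;
        return 0;
    }

    void *arena_alloc(struct arena *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;          /* keep 16-byte alignment */
        if (n > a->size - a->used)
            return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* seal the arena after all the long-lived data has been built */
    int arena_seal(struct arena *a)
    {
        return mprotect(a->base, a->size, PROT_READ);
    }

(Of course, as I said, nothing stops an attacker who can already run code from just calling mprotect() again.)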
Posted Jan 30, 2015 12:58 UTC (Fri) by spender (guest, #23067)
The point is that these days remote code execution is difficult to achieve reliably (especially on a wide scale) due to NX/ASLR/etc. But those defenses can't help against data-only attacks like the one pulled off against Exim, where the Exim code executes in its original order but with corrupted data, which in this case causes it to execute arbitrary commands on the system.
So your second paragraph is correct.
-Brad
Posted Jan 30, 2015 13:15 UTC (Fri) by spender (guest, #23067)
-Brad
Posted Feb 1, 2015 19:25 UTC (Sun) by nix (subscriber, #2304)
Posted Feb 1, 2015 20:46 UTC (Sun) by PaXTeam (guest, #24616)
Posted Feb 1, 2015 23:58 UTC (Sun) by zlynx (guest, #2285)
If so, there's a perfect example of security destroying performance: the CPU's branch predictor has already determined the RET's target long before execution reaches the end of the function, and then moving the stack yanks that away and is going to cause a big pipeline bubble.
Posted Feb 2, 2015 0:25 UTC (Mon) by PaXTeam (guest, #24616)
Posted Jan 31, 2015 1:19 UTC (Sat) by Comet (subscriber, #11646)
The real issue is that, for too long, C has been the only practical language for portable Unix systems software development, and the degree of care required to prevent problems such as off-by-one errors trampling memory elsewhere approaches superhuman. It's been 30+ years and we're still discovering issues in base BSD code. If I were starting an MTA project from scratch, instead of helping maintain one, I damned well wouldn't write it in C. Heck, on some systems, we can't even trust the base system services library. ;)