Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 28, 2015 2:19 UTC (Wed) by spender (guest, #23067)
Parent article: Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

The effective impact of this vulnerability is not "highly critical", though to see that you'd have to really dig into the details of the actual advisory rather than base your opinion on what Qualys' PR team has been building up for over four months.

It's rather telling, actually, that in all that time Qualys only managed to pull off a data-only attack against Exim, in a non-default configuration, using a previously published technique (http://www.rapid7.com/db/modules/exploit/unix/smtp/exim4_...). Exim didn't learn its lesson from last time: it continues to keep data about what commands to execute at runtime in a persistent writable buffer, ripe for abuse. There are many conditions on exploiting the vulnerability: how it can be triggered, how big the overflow is, and what the overflowed bytes are permitted to contain. In a follow-up to the advisory, Qualys lists software it tried but (unfortunately, for its PR purposes) failed to exploit: http://www.openwall.com/lists/oss-security/2015/01/27/18 .

The vulnerability was actually fixed a year and a half ago, but the fix didn't make its way into the majority of distros, probably due to "Linus-style" disclosure: https://sourceware.org/ml/libc-alpha/2013-01/msg00809.html . Some did (eventually) identify it correctly as a vulnerability fix, though: https://chromium.googlesource.com/chromiumos/overlays/chr...

But readers here and elsewhere will see "highly critical", overreact just as Qualys' PR team intended, and learn nothing from the entire event. Carry on!

-Brad



Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 28, 2015 3:17 UTC (Wed) by mathstuf (subscriber, #69389)

> The vulnerability was also fixed a year and a half ago but didn't make its way to the majority of distros, probably due to "Linus-style" disclosure

Was the fact that it was a security fix intentionally suppressed and left uncommented-on (though "fix size check" certainly reads to me as a trigger for such things)? Also, why didn't Google inform upstream that the patch needed backporting when they picked it up in April (they didn't ask for a CVE either)?

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 29, 2015 6:34 UTC (Thu) by siddhesh (guest, #64914)

I don't think it was intentional; it simply wasn't obvious at the time that the bug had security implications. There was also an extensive exercise over the last year by Florian Weimer to mark every bug in upstream glibc with a flag indicating whether it was a security bug, and even in that audit this bug didn't come up as having security implications.

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 30, 2015 4:07 UTC (Fri) by Comet (subscriber, #11646)

What, precisely, do you feel that Exim is doing wrong here?

It has a buffer; that's pretty essential to reading data over a network, since the octets have to go somewhere in memory. The buffer takes the incoming SMTP commands and is scanned for them, which is all inherent to a text-based network protocol. Exim has actually done better than some others here, by doing things like explicitly switching to a new buffer when there's a new security context, such as after establishing TLS, which has avoided some vulnerabilities.
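
Conceptually, that buffer swap is just this (my own simplified sketch, not Exim's actual code):

    /* Simplified sketch of swapping to a fresh input buffer when a new
     * security context is established (e.g. after STARTTLS); this is an
     * illustration, not Exim's actual code. Discarding the old buffer
     * throws away any plaintext bytes an attacker managed to queue up
     * before the handshake. */
    #include <stdlib.h>

    #define SMTP_BUF_SIZE 8192

    struct smtp_conn {
        char *inbuf;     /* incoming command buffer */
        size_t inlen;    /* bytes currently buffered */
    };

    static void new_security_context(struct smtp_conn *c)
    {
        char *fresh = malloc(SMTP_BUF_SIZE);
        if (!fresh)
            abort();
        free(c->inbuf);  /* drop anything buffered pre-handshake */
        c->inbuf = fresh;
        c->inlen = 0;
    }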

I'd appreciate guidance on what you think should be done differently with Exim's buffer management here, rather than a vague handwavy rant of "it uses a writable buffer!?!!".

Thanks.

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 30, 2015 4:41 UTC (Fri) by spender (guest, #23067)

I thought what I said was pretty clear and not at all an abstract complaint about generic "writable buffers" as you claim. But ok, play the fool, let me spell it out for you.

We all pretty well understand by now that having writable code is a bad idea. But it's not just machine code that's the problem -- a recent example was the Linux kernel's BPF interpreter buffers. By corrupting those buffers (an unprivileged user could easily spray kernel memory with them and trigger adjacent-object overwrites to corrupt them), one could achieve easy arbitrary read and write of kernel memory. Extended BPF made it even worse, by allowing arbitrary function execution via such corruption. The way to implement such interpreter buffers securely, particularly when they don't need to be written to at runtime, is to map them read-only. After I complained about it, that is how it was fixed (though I'm still not fully happy with the implementation).
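
The pattern itself is trivial (a minimal userland sketch of write-then-seal, not the actual kernel patch):

    /* Minimal sketch of the write-then-seal pattern; not the kernel's
     * actual BPF fix. The buffer is writable only while it is being
     * filled, then flipped to read-only so later memory corruption
     * cannot retarget it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = (size_t)sysconf(_SC_PAGESIZE);
        unsigned char *prog = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (prog == MAP_FAILED) { perror("mmap"); return 1; }

        memset(prog, 0x90, 16);          /* stand-in for interpreter data */

        /* Seal it: a stray write through a corrupted pointer now faults
         * instead of silently redirecting the interpreter. */
        if (mprotect(prog, len, PROT_READ)) { perror("mprotect"); return 1; }

        printf("first byte: %#x\n", prog[0]);
        /* prog[0] = 0;  <- would now die with SIGSEGV */
        return 0;
    }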

Another example is Heartbleed: sensitive information was located in the normal heap, subjecting it to any potential linear infoleak from an adjacent object. Moving it to its own mapping was a suggested security improvement.
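
In userland terms, that suggestion amounts to something like this (a sketch of the idea, not OpenSSL's actual mitigation):

    /* Sketch: keep key material in its own mapping so a linear over-read
     * of an adjacent heap object can never reach it. Illustration only,
     * not OpenSSL's code. */
    #include <string.h>
    #include <sys/mman.h>

    void *alloc_secret(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        mlock(p, len);           /* keep it out of swap, too */
        return p;
    }

    void free_secret(void *p, size_t len)
    {
        memset(p, 0, len);       /* best-effort wipe before unmapping */
        munmap(p, len);
    }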

Likewise in the case of Exim: what's been repeatedly targeted in exploits is data parsed from its config file that doesn't need to be modified at runtime. Among that configuration information (as I mentioned in my post, but which you chose to ignore for whatever reason) is parsed data describing what commands to run on the system. If you had bothered to look at the rapid7 link I posted, you'd have seen a specific abuse of Exim's ACLs. Once this data is parsed, it shouldn't need to be modified at runtime, and since modifying it hands an attacker arbitrary command execution, it should be made read-only, just like machine code. At the very least it should not sit in the general heap, ambiently writable. If it does need to be modified infrequently at runtime, that can be handled via temporary mprotect() calls -- still much better than the current state of affairs.
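
Concretely, the shape of it is something like this (a sketch of the approach, not a patch against Exim):

    /* Sketch: parsed configuration lives in a dedicated mapping that is
     * sealed read-only once parsing finishes; a rare legitimate update
     * gets a brief writable window. Illustration only, not Exim code. */
    #include <stddef.h>
    #include <sys/mman.h>

    static void *cfg;            /* dedicated mapping for parsed config */
    static size_t cfg_len;

    static int cfg_init(size_t len)
    {
        cfg = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (cfg == MAP_FAILED)
            return -1;
        cfg_len = len;
        return 0;
    }

    /* ... parse the config file into cfg here, then: */

    static void cfg_seal(void)
    {
        mprotect(cfg, cfg_len, PROT_READ);  /* ACL strings etc. now immutable */
    }

    static void cfg_update(void (*mutate)(void *))
    {
        mprotect(cfg, cfg_len, PROT_READ | PROT_WRITE);
        mutate(cfg);                        /* brief writable window */
        mprotect(cfg, cfg_len, PROT_READ);
    }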

Next time try actually reading the post you're replying to, as you could have learned all this information on your own. This is all really basic security hardening stuff.

Thanks,
-Brad

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 30, 2015 7:39 UTC (Fri) by kleptog (subscriber, #1183)

> Once this data is parsed, it shouldn't need to be modified at runtime, and since modifying it hands an attacker arbitrary command execution, it should be made read-only, just like machine code.

Ok, learning moment here. As far as I know there is no API that can make a block of memory permanently read-only. You can mprotect() it, but the attacker could just make it writable again with another mprotect() call.

Unless you're saying that, in this special case, because no code execution was possible, the attacker would have no opportunity to call mprotect() and so would have to look harder for a way to abuse this problem to run code.

Is that right?

Getting any complicated data structure into one place so you can make it read-only would amount to using a different allocator, and writing your own custom memory allocator seems like a bad idea, as it would likely have its own bugs.

I see that glibc has something called obstacks, which might be appropriate, but it's hard to tell from the documentation, since it isn't written from a security point of view.
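
Something minimal like this is presumably what's meant (a sketch assuming everything is allocated once at startup, not a hardened implementation):

    /* Sketch of a minimal "different allocator": a bump allocator over a
     * private mapping that can be flipped read-only once setup is done.
     * Assumes allocate-once-at-startup; nothing is ever freed. */
    #include <stddef.h>
    #include <sys/mman.h>

    static unsigned char *arena;
    static size_t arena_off, arena_len;

    static int arena_init(size_t len)
    {
        arena = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED)
            return -1;
        arena_len = len;
        return 0;
    }

    static void *arena_alloc(size_t n)
    {
        n = (n + 15) & ~(size_t)15;      /* keep 16-byte alignment */
        if (n > arena_len - arena_off)
            return NULL;
        void *p = arena + arena_off;
        arena_off += n;
        return p;
    }

    static void arena_seal(void)         /* no free list; just flip to R/O */
    {
        mprotect(arena, arena_len, PROT_READ);
    }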

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 30, 2015 12:58 UTC (Fri) by spender (guest, #23067)

If the attacker could make it writable again, then it means they've already achieved arbitrary code execution or something close to it, which also means they likely could have executed system() instead of mprotect().

The point is that these days remote code execution is difficult to achieve reliably (especially on a wide scale) due to NX/ASLR/etc. But those defenses can't help against data-only attacks like the one pulled off against Exim, where Exim's code executes in its original order, but with corrupted data that causes it to execute arbitrary commands on the system.

So your second paragraph is correct.

-Brad

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 30, 2015 13:15 UTC (Fri) by spender (guest, #23067)

One small nitpick about your second paragraph though -- this isn't a special case, but rather the rule. Achieving code execution through memory corruption always begins with writes. The only possible target of those writes is memory that is writable.

-Brad

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Feb 1, 2015 19:25 UTC (Sun) by nix (subscriber, #2304)

Given the existence of ret-to-libc attacks, and the way they generally start (with an overflow of something on the stack), I'd be fascinated to know how you propose to make the stack non-writable.

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Feb 1, 2015 20:46 UTC (Sun) by PaXTeam (guest, #24616)

you don't make the stack read-only but store the sensitive information (such as the return address) elsewhere, in (most of the time) read-only storage. in practice that's either the shadow stack approach or more recently CPS/CPI (http://levee.epfl.ch/). or you can keep the sensitive information exposed to attacks but verify the code pointer targets on dereference (control flow integrity at various levels of granularity).
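
a toy flavor of the shadow stack check, hand-written (real implementations are compiler-inserted and keep the shadow region itself out of the attacker's reach; this sketch assumes gcc's __builtin_return_address and uses a plain writable array, so it only illustrates the comparison):

    /* Toy shadow-stack illustration: keep a second copy of the return
     * address in a separate array and compare on the way out. A real
     * shadow stack is maintained by the compiler and its storage is
     * protected; this writable array is for demonstration only. */
    #include <stdio.h>
    #include <stdlib.h>

    static void *shadow[1024];
    static int shadow_top;

    #define ENTER() (shadow[shadow_top++] = __builtin_return_address(0))
    #define LEAVE() do { \
            if (shadow[--shadow_top] != __builtin_return_address(0)) \
                abort();  /* on-stack return address was tampered with */ \
        } while (0)

    static void victim(void)
    {
        ENTER();
        char buf[16];
        (void)buf;  /* an overflow of buf could smash the on-stack return
                       address, but not the shadow copy */
        LEAVE();
    }

    int main(void)
    {
        victim();
        puts("return address intact");
        return 0;
    }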

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Feb 1, 2015 23:58 UTC (Sun) by zlynx (guest, #2285)

So how are they doing that on x86 processors? Rewriting the stack pointer just before function exit, or something?

If so, there's a perfect example of security destroying performance, because the CPU branch prediction has already determined the jump location of the RET long before it reaches the end of the function. And then moving the stack yanks that away and is going to cause a big pipeline bubble.

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Feb 2, 2015 0:25 UTC (Mon) by PaXTeam (guest, #24616)

it's all in the paper and the source code (a rare example as far as academic research goes). my understanding is that the primary/main stack pointer is kept as is, unsafe data (think buffers, etc) are moved onto a secondary (unsafe) stack instead. that's not to say that there aren't open questions, e.g., how they can avoid having fat pointers *and* halving the available address space...
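
to make it concrete, a hand-written before/after of the split (my illustration, not the actual compiler pass from the paper):

    /* Conceptual before/after of the safe-stack split; hand-written
     * illustration, not the compiler transformation itself. */
    #include <string.h>

    /* Before: the buffer sits on the regular stack, adjacent to the
     * return address, so an overflow can reach it. */
    void greet_unsafe(const char *name)
    {
        char buf[32];
        strcpy(buf, name);       /* overflow clobbers the return address */
    }

    /* After: the "unsafe" object lives on a secondary, per-thread stack;
     * an overflow now tramples only that region, never the return
     * address on the primary stack. */
    static __thread char unsafe_stack[4096];
    static __thread size_t unsafe_top;

    void greet_safe(const char *name)
    {
        char *buf = &unsafe_stack[unsafe_top];
        unsafe_top += 32;
        strcpy(buf, name);       /* still a bug, but contained */
        unsafe_top -= 32;
    }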

Highly critical “Ghost” allowing code execution affects most Linux systems (Ars Technica)

Posted Jan 31, 2015 1:19 UTC (Sat) by Comet (subscriber, #11646)

I read it all, three times, and reviewed the linked article, before replying. You may know what you meant by what you said, but you omitted much. I thought that _maybe_ you were referring to the ability to edit the data pulled from the config files, but couldn't see how that squares with a design where the config file is parsed into variables which hold the individual settings.

Now, some of those are subject to re-expansion, and it might be useful to adjust the memory pool used for allocating storage for everything coming from a config file, so that it can transition to read-only at a later point. But you're into seriously diminishing returns here: you're tackling one subset of potential avenues of attack, and once memory overwriting has happened, you're fighting a battle you've already lost. Yes, it's reductively true that mprotect()ing away access is "better", but I don't think it's productive as anything more than barely-above-theatre.

The real issue is that, for too long, C has been the only practical language for portable Unix systems software development, and the degree of care required to prevent problems such as off-by-one errors trampling memory elsewhere approaches the superhuman. It's been 30+ years and we're still discovering issues in base BSD code. If I were starting an MTA project from scratch, instead of helping maintain one, I damned well wouldn't write it in C. Heck, on some systems, we can't even trust the base system services library. ;)

