Spengler: False Boundaries and Arbitrary Code Execution
Posted Jan 6, 2011 13:54 UTC (Thu) by spender (subscriber, #23067)
stackbuf[attacker_controlled_index] = maybe_attacker_controlled_value;
SSP doesn't actually protect any function pointers or saved instruction pointers. It places a cookie on the stack that it expects to be overwritten in the case of a linear stack overflow, and checks that cookie in function epilogues.
A proper ASLR implementation is a more useful mitigation (it helps even in the example above), though it too is of limited use in the presence of an additional info-leak vulnerability. It should be mentioned that SSP is also of limited use in the presence of the same vulnerability (leaking of the random cookie).
No one has real, deterministic protection against ret2libc (yet). Lest I be accused of FUD again by someone in an effort to drive up PaX usage, I won't mention who will be the first to implement this technology.
Posted Jan 6, 2011 16:29 UTC (Thu) by cesarb (subscriber, #6266)
For instance, if a processor has a separate hardware return address stack, pushes/pops from it on call/return, and needs a special instruction (not the normal memory load/store instructions) to manipulate it directly, it becomes much harder to manipulate the return address of normal code (you would only be able to manipulate the return address of code which does nasty control flow manipulations, and only if such code reads from somewhere which you can write to).
I can even see how to implement this idea in userspace on common hardware (but in a way no one would do since it would be too slow):
* The kernel makes available a stack in its memory which is not accessible to userspace, and provides system calls to read and write it
* The compiler, in each function's prologue, tells the kernel to push the return address (read from the stack or the return address register, depending on the architecture)
* The compiler, in each function's epilogue, just before the return instruction, asks the kernel for the return address and writes it to where the hardware expects it to be (so the return instruction will read it)
This could even be done within the kernel (and be even slower), by having the special return address stack be on a page which is mapped just before reading/writing and unmapping it from the page tables afterwards.
Posted Jan 6, 2011 16:41 UTC (Thu) by cesarb (subscriber, #6266)
Simply XOR the return address with a random number in the prologue, and XOR it back just before the return instruction. That's two loads, a XOR, and a store in the prologue, and again two loads, a XOR, and a store in the epilogue. You can save one load if the random value is kept in a register (but then you add register pressure on x86-32). It should convert a return-to-libc into a jump to a random address, which should be quite effective on 64-bit architectures with lots of unmapped address space. As long as the attacker cannot *read* the random number, of course, but it is yet another speed bump. It would be even more secure if the random value always lived in a register and was never saved to memory.
Posted Jan 6, 2011 19:19 UTC (Thu) by spender (subscriber, #23067)
Posted Jan 6, 2011 22:21 UTC (Thu) by cesarb (subscriber, #6266)
My idea was a bit different; instead of XOR with an ASLR-randomized stack pointer, it would XOR with a cookie read from a global variable (initialized to a random number on a global constructor). So leaking the stack pointer would not be enough, you would need a leak of either the cookie or an obfuscated pointer (which you would then XOR with the expected unobfuscated pointer to recover the cookie). And, as a bonus, it does something useful even without ASLR enabled.
But what to XOR with is only a small detail (and a local decision, even: since it is completely contained within each function, different parts of the same program can XOR with values obtained in different ways); the main idea, XORing the return address on the stack, is the same in both my comment above and in your link ;-) . I completely forgot about the frame pointer, however (your link didn't).
The main problem with this idea is that it could break GDB badly (as mentioned in the link PaXTeam posted), unless an extension to the debugging format were developed to tell GDB where to find the cookie and which functions use it. Of course, the user can simply zero the cookie within GDB before debugging the program, to prevent the values from being obfuscated.
Posted Jan 7, 2011 21:02 UTC (Fri) by PaXTeam (guest, #24616)
search google for the following titles/keywords:
"Embedded Firmware Diversity for Smart Electric Meters"
"Hardware and Binary Modification Support for Code Pointer Protection From Buffer Overflow"
"G-Free: Defeating Return-Oriented Programming through Gadget-less Binaries"
"HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity"
"Preventing memory error exploits with WIT"
"Control-Flow Integrity Principles, Implementations, and Applications" (in general, MSR's gleipnir project and the related papers)
"Automated Detection of Persistent Kernel Control-Flow Attacks"
of course this is just a small selection; this area of research goes back decades (no, it didn't start in security ;-).
Posted Jan 9, 2011 19:12 UTC (Sun) by nix (subscriber, #2304)
Posted Jan 6, 2011 20:00 UTC (Thu) by PaXTeam (guest, #24616)
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds