A GCC -fstack-protector vulnerability on arm64
Posted Sep 13, 2023 15:01 UTC (Wed) by geofft (subscriber, #59789)In reply to: A GCC -fstack-protector vulnerability on arm64 by PengZheng
Parent article: A GCC -fstack-protector vulnerability on arm64
(Even if the program is processing trusted input - e.g., it's part of something like 'make' whose purpose is running code anyway, and so there cannot really be security vulnerabilities - there still isn't a point in having it conditionally overrun the stack and crash. Just detect when the inputs are too big and conditionally throw an error at the beginning of the program. The effect for the end user is no worse, and probably a bit better really.)
This scenario only makes sense, I think, if you can somehow guarantee that when A makes a large VLA, B and C definitely will not, etc. But I'm having trouble thinking of how you'd end up with code like that. Most of the time, if you are processing large input in one function and call another, that second function is going to also process large input too, or at best process data of constant size. It isn't going to get smaller.
Maybe your logic sometimes does lots of work in B, and sometimes lots of work in C instead, but only in one or the other? But you can solve that by just creating a stack array in B (or A) and passing a pointer to it down to C, instead of doing another allocation in C. Pointers to stack variables remain valid as long as you're somewhere deeper on the stack.
Posted Sep 16, 2023 7:02 UTC (Sat) by ssmith32 (subscriber, #72404)
For most programs, a very rare, badly performing worst case is better than a cost that occurs on every run and, while less severe than that rare worst case, is still somewhat worse than the common case.
See: quicksort's O(n^2) worst case vs. mergesort's O(n log n).
Since quicksort is *usually* faster, it often is the better choice, despite having a much, much worse worst case.
In fact, if one always just allocated the worst case statically, there'd actually be no point for heap memory whatsoever - just allocate for the worst case, for anything.
>There is no benefit in converting a program that unconditionally overruns the stack to one that conditionally overruns the stack.
Yes, there is: if the condition is very rare, it is far, far better to have a program that overruns the stack only rarely than one that always does. The only benefit of a program that always runs out of memory is if you're selling memory, or if you're using it as a test program to convince someone who needs 99.9999% uptime that the worst case will crash on the given hardware.
In fact, I'd imagine the space of programs where rarely crashing is not preferable to always crashing is rather small indeed.