
4K stacks by default?

By Jake Edge
April 23, 2008

The kernel stack is a rather important chunk of memory in any Linux system. The unpleasant kernel memory corruption that results from overflowing it is something to be avoided at all costs. But the stack is allocated for each process and thread in the system, so those looking to reduce memory usage target the 8K stack used by default on x86. In addition, an 8K stack requires two physically contiguous pages (an "order 1" allocation), which can be difficult to satisfy on a running system due to fragmentation.
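To see where the "order 1" requirement comes from, consider a simplified sketch of the 2.6-era x86 logic. The names follow the kernel headers of the time, but this is an illustration rather than verbatim kernel source:

#ifdef CONFIG_4KSTACKS
#define THREAD_SIZE    4096    /* one page: an order-0 allocation */
#else
#define THREAD_SIZE    8192    /* two contiguous pages: order 1 */
#endif

/* The stack shares its pages with struct thread_info and comes
 * straight from the page allocator, so an 8K stack needs two
 * physically contiguous free pages at every process creation: */
static struct thread_info *alloc_stack(void)
{
        /* returns NULL if no two contiguous free pages can be found */
        return (struct thread_info *)
                __get_free_pages(GFP_KERNEL, get_order(THREAD_SIZE));
}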

Linux has had optional support for 4K stacks for nearly four years now, with Fedora and RHEL enabling it on the kernels they ship, but a recent patch to make it the default for x86 has raised some eyebrows. Andrew Morton sees it as bypassing the normal patch submission process:

This patch will cause kernels to crash.

It has no changelog which explains or justifies the alteration.

afaict the patch was not posted to the mailing list and was not discussed or reviewed.

It is not surprising that patch author Ingo Molnar sees things a little differently:

what mainline kernels crash and how will they crash? Fedora and other distros have had 4K stacks enabled for years [ ... ] and we've conducted tens of thousands of bootup tests with all sorts of drivers and kernel options enabled and have yet to see a single crash due to 4K stacks. So basically the kernel default just follows the common distro default now. (distros and users can still disable it)

As described in an earlier LWN article, the main concerns about only providing 4K for the kernel stack are for complicated storage configurations or for people using NDISwrapper. There is fairly high disdain for the latter case—as it is done to load proprietary Windows drivers into the kernel—but it could lead to a pretty hideous failure in the former. Data corruption certainly seems like a possibility, but, regardless, a kernel crash is definitely not what an administrator wants to have to deal with.

Arjan van de Ven summarized the current state, noting that NDISwrapper really requires 12K stacks, so having 8K only makes it less likely those kernels will crash. The stacking of multiple storage drivers (network filesystems, device mapper, RAID, etc.) is a bigger issue:

we need to know which they are, and then solve them, because even on x86-64 with 8k stacks they can be a problem (just because the stack frames are bigger, although not quite double, there).

Proponents of default 4K stacks seem to be puzzled why there is objection to the change since there have been no problems with Red Hat kernels. But Andi Kleen notes:

One way they do that is by marking significant parts of the kernel unsupported. I don't think that's an option for mainline.

The xfs filesystem, which is not supported in RHEL or Fedora, can potentially use a great deal of stack. This leads some kernel hackers to worry that a complicated configuration that uses it, an "nfs+xfs+md+scsi writeback" configuration as Eric Sandeen puts it, could overflow. Work is already proceeding to reduce the xfs stack usage, but it clearly is a problem that xfs hackers have seen. David Chinner responds to a question about stack overflows:

We see them regularly enough on x86 to know that the first question to any strange crash is "are you using 4k stacks?". In comparison, I have never heard of a single stack overflow on x86_64....

It would seem premature to make 4K stacks the default. There is good reason to believe that folks using xfs could run into problems. But there is a larger issue, one that Morton brought up in his initial message, then reiterated later in the thread:

Anyway. We should be having this sort of discussion _before_ a patch gets merged, no?

The memory savings can be significant, especially in the embedded world. Coupled with the elimination of an order 1 allocation each time a process gets created, there is good reason to keep working toward 4K stacks by default. As of this writing, 4K stacks remain the default in Linus's tree, but that could change before long.



4K stacks by default?

Posted Apr 24, 2008 4:17 UTC (Thu) by bronson (subscriber, #4806) [Link]

Is four years of talking about 4K stacks enough?  You can prepare and prepare for decades and
still not be certain you've caught everything.  There comes a time when you just flip the switch
and fix anything that breaks.  I hope that time is soon.

Is there some way to allocate a 12K chunk and use it as the stack when ndiswrapper calls into
Windows code?  Seems easy enough to me, but I come from a day when kernels and memory
architectures were a LOT simpler.  :)
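For what it's worth, the idea can be sketched in a few lines. What follows is a hypothetical i386 illustration (the function and constant names are invented, and this is emphatically not ndiswrapper's actual code). One reason it isn't as easy as it looks: on i386 the kernel derives current_thread_info() by masking the stack pointer with ~(THREAD_SIZE - 1), so anything running on a foreign stack that touches "current" will misbehave.

#define PRIVATE_STACK_SIZE (12 * 1024)  /* hypothetical; itself an order-2 allocation */

/* Run fn on a freshly allocated private stack, then switch back.
 * Sketch only: glosses over interrupts, preemption, and the
 * current_thread_info() problem described above. */
static void call_on_private_stack(void (*fn)(void))
{
        void *stack = kmalloc(PRIVATE_STACK_SIZE, GFP_KERNEL);

        if (!stack)
                return;
        asm volatile("movl  %%esp, %%ebx\n\t"   /* save the real stack pointer */
                     "movl  %0, %%esp\n\t"      /* switch to the private stack */
                     "call  *%1\n\t"            /* call into the wrapped driver */
                     "movl  %%ebx, %%esp"       /* switch back */
                     :
                     : "r" (stack + PRIVATE_STACK_SIZE), "r" (fn)
                     : "ebx", "memory");
        kfree(stack);
}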


4K stacks by default?

Posted Apr 24, 2008 12:42 UTC (Thu) by pr1268 (subscriber, #24648) [Link]

Is four years of talking about 4K stacks enough? You can prepare and prepare for decades and still not be certain you've caught everything. There comes a time when you just flip the switch and fix anything that breaks. I hope that time is soon.

Having read Jake's article, I was under the impression that the bigger issue wasn't about the impact of defaulting to 4K stacks, but rather how the patch submission showed a total disregard for following the established procedure. IMO this smacks of some ulterior motive of trying to "sneak" it by the senior kernel developers, especially given the controversy of the patch. But then again, I often entertain harebrained conspiracy theories, so don't mind me. ;-)

I do agree with your comment, though.

Here's a different, more constructive conspiracy theory: Perhaps the submitter knew full well that this patch would be caught despite the clandestine technique used, and he/she wanted to stimulate a discussion on defaulting to 4K stacks--after all, four years is a long time to keep this patch in mainline only to have it disabled by default. Of course, this wouldn't explain why the submitter didn't just announce the change and ask for comments on the LKML...

I've run 4K stacks on vanilla kernels for several years now without any issues, even with proprietary NVIDIA graphics drivers (I know, I know!). I do remember having to keep 8K stacks on my laptop prior to 2.6.17 with a Broadcom Wifi card and NDISwrapper (need I say more?).

4K stacks by default?

Posted Apr 24, 2008 23:01 UTC (Thu) by dvdeug (subscriber, #10998) [Link]

Why does this switch need to be done? If 8k stacks have worked for years, then they should be
fine at least until the last x86 desktop/server is as common as Vaxen are now. Why not leave
it as an option for those who really need it?

4K stacks by default?

Posted Apr 24, 2008 23:56 UTC (Thu) by zlynx (subscriber, #2285) [Link]

I believe the RHEL support engineers were finding systems with mysterious fork/clone failures
that were caused by the kernel not being able to find 8K of contiguous memory.  It's really
easy to allocate 4K since it's the i386 page size, but two pages next to each other can fail.
Big Java programs using a lot of threads would fail to get a new thread.  Apache servers would
fail to spawn a new child.  Etc.

However, since then (2.6.16?) the memory system has also been reworked a bunch and I don't
know if it's still such a problem to get an 8K alloc.

You *would* think those big programs would now be running on x86_64 systems with the 8K stacks
and having the same problems, if they still existed.  Or maybe they get around it by
installing 16 GB RAM instead.
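As an aside, the failure mode described above reaches user space as a plain error return; here is a minimal, purely illustrative sketch of what a Java runtime or Apache would have seen:

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        pid_t pid = fork();

        if (pid < 0) {
                /* On a fragmented box this can be EAGAIN or ENOMEM even
                 * though /proc/meminfo shows plenty of free memory: the
                 * kernel simply found no two contiguous pages. */
                perror("fork");
                return 1;
        }
        if (pid == 0)
                _exit(0);       /* child exits immediately */
        return 0;
}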

4K stacks by default?

Posted Apr 24, 2008 10:28 UTC (Thu) by scarabaeus (subscriber, #7142) [Link]

I've always wondered: Would it be that difficult to fault in additional stack pages on demand,
so the stack can grow as needed? Apparently there are good reasons why this is not possible -
can someone explain them?

4K stacks by default?

Posted Apr 24, 2008 10:50 UTC (Thu) by MathFox (guest, #6104) [Link]

The page fault code needs stack too...

IIRC the Linux developers made the explicit decision that kernel code and data will always be
in RAM; a page fault from kernel code is a reason to panic. If you want to make kernel code
and data "demand pageable" you must take care that all code (and data) needed for paging in the
swapped-out kernel pages is locked in RAM. "The data I need to load this page is only
available in the swap." Linux systems can swap to a local file system or over the network, so a
lot of code (and data) would have to be locked to keep the system running. The kernel gurus
decided that keeping 100% of the kernel in RAM was far easier to manage.

4K stacks by default?

Posted Apr 25, 2008 3:02 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

I think a more fundamental problem with paging in extra stack when you need it is that for a stack to work, it has to be in contiguous address space. The addresses past the end of the stack aren't available when you need them.

I believe address space is a more scarce resource than physical memory on many systems these days.

4K stacks by default?

Posted Apr 25, 2008 18:11 UTC (Fri) by scarabaeus (subscriber, #7142) [Link]

Thanks for your comments! :-) But I still don't understand...

I'm not proposing to swap out kernel stack pages. Instead, I'm wondering why it isn't possible
to just allocate additional memory pages for the stack the moment a page fault happens because
the currently allocated stack overflows.

This assumes that it is possible to just map in additional pages. Why does the stack have to
be in contiguous memory - is it addressed via its physical address?

If so, is the cost of setting up virtual page mapping too high? The event that new stack pages
would have to be allocated would be very rare, so it wouldn't have to be fast...

4K stacks by default?

Posted Apr 25, 2008 20:41 UTC (Fri) by nix (subscriber, #2304) [Link]

If a page fault happens, you might need to swap pages out in order to 
satisfy the request for an additional page. You might think you could just 
use GFP_ATOMIC allocation for this, but the pages have to be contiguous 
(which might involve memory motion and swapping on its own), and if a lot 
of processes all need extra stack at once you'll run short on the free (-> 
normally wasted) memory available for GFP_ATOMIC allocations.

4K stacks by default?

Posted Apr 25, 2008 21:31 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

Why does the stack have to be in contiguous memory - is it addressed via its physical address?

Contiguous virtual memory. That's what I meant by the address space being the scarce resource. We can afford to allocate 4K of virtual addresses when the process is created, but we can't afford to allocate 8K of them even if the second 4K isn't mapped to physical memory until needed.

With a different memory layout, Linux might not have that problem. Some OSes put the kernel stack in a separate address space for each process. But Linux puts all of the kernel memory, including every process' stack, in all the address spaces. So even if the kernel were pageable, there would still be a virtual address allocation problem.

4K stacks by default?

Posted Apr 24, 2008 12:02 UTC (Thu) by nix (subscriber, #2304) [Link]

have yet to see a single crash due to 4K stacks
I guess no Fedora users use LVM and CD-RW burners at the same time, then (no need to LVM the CD-RW, just using it is enough), as until 2.6.25 that case was blowing the 4K stack.

There are still plenty of stack blowers out there :/

4K stacks by default?

Posted Apr 24, 2008 15:51 UTC (Thu) by jzbiciak (✭ supporter ✭, #5246) [Link]

I do find it interesting (though not terribly surprising) that x86-64 treads more lightly on
the stack than x86.  My initial inclination is that there are two factors at play:  x86-64
should spill a whole heck of a lot less, and x86-64 passes more arguments in registers.

Anyone here have any thoughts?

4K stacks by default?

Posted Apr 24, 2008 17:44 UTC (Thu) by proski (subscriber, #104) [Link]

From Linux 2.6.25, file include/asm-x86/page_64.h:
#define THREAD_ORDER    1
#define THREAD_SIZE  (PAGE_SIZE << THREAD_ORDER)
This looks like 8k to my untrained eye.

4K stacks by default?

Posted Apr 24, 2008 18:28 UTC (Thu) by jzbiciak (✭ supporter ✭, #5246) [Link]

Currently both x86 and x86-64 have 8K stacks by default as I recall. That wasn't what I was talking about. I was referring to this comment in the original article:

We see them regularly enough on x86 to know that the first question to any strange crash is "are you using 4k stacks?". In comparison, I have never heard of a single stack overflow on x86_64....

That's just a general statement that suggests x86-64 places less demand on the stack than x86.

4K stacks by default?

Posted Apr 24, 2008 21:48 UTC (Thu) by proski (subscriber, #104) [Link]

Please check your logic.  That suggests that x86_64 is significantly less likely to run out of
8k than i386 out of 4k.

But if you are right about reduced usage of stack for automatic variables and parameter
passing, it means that 4k stacks could be attempted on x86_64.

4K stacks by default?

Posted Apr 24, 2008 22:44 UTC (Thu) by jzbiciak (✭ supporter ✭, #5246) [Link]

There was a lengthier comment, quoted over on KernelTrap, indicating it wasn't a "4K on x86 vs. 8K on x86-64" situation. That perhaps biased my reading of the quote above, so that I didn't read the same into it that you did. That exchange was:

From: Eric Sandeen <sandeen@...>
Subject: Re: x86: 4kstacks default
Date: Apr 19, 10:36 pm 2008

Arjan van de Ven wrote:

> On the flipside the arguments tend to be
> 1) certain stackings of components still runs the risk of overflowing
> 2) I want to run ndiswrapper
> 3) general, unspecified uneasyness.
> 
> For 1), we need to know which they are, and then solve them, because even on x86-64 with 8k stacks
> they can be a problem (just because the stack frames are bigger, although not quite double, there).

Except, apparently, not, at least in my experience.

Ask the xfs guys if they see stack overflows on x86_64, or on x86.

I've personally never seen common stack problems with xfs on x86_64, but
it's very common on x86.  I don't have a great answer for why, but
that's my anecdotal evidence.

I agree that without this additional context it's easy to interpret the shorter quote the way you did. Sorry about that.

4K stacks by default?

Posted Apr 24, 2008 18:46 UTC (Thu) by sniper (subscriber, #13219) [Link]

From: http://www.x86-64.org/documentation/abi.pdf

Registers is the correct answer. Check out the section on passing parameters.

Example:

typedef struct {
  int a, b;
  double d;
} structparm;
structparm s;
int e, f, g, h, i, j, k;
long double ld;
double m, n;
extern void func (int e, int f,
                  structparm s, int g, int h,
                  long double ld, double m,
                  double n, int i, int j, int k);
func (e, f, s, g, h, ld, m, n, i, j, k);


General Purpose  Floating Point    Stack Frame Offset
%rdi: e          %xmm0: s.d        0:  ld
%rsi: f          %xmm1: m          16: j
%rdx: s.a,s.b    %xmm2: n          24: k
%rcx: g
%r8:  h
%r9:  i

4K stacks by default?

Posted Apr 25, 2008 3:05 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

The stack doesn't overflow on x86-64 because it passes parameters in registers instead of on the stack?

Doesn't that just mean there are more registers that have to be saved on the stack?

There's the same total amount of state in the call chain either way; it has to be stored somewhere.

4K stacks by default?

Posted Apr 25, 2008 4:22 UTC (Fri) by jzbiciak (✭ supporter ✭, #5246) [Link]

Hardly.

If parameters are passed on the stack, the argument frame basically exists on the stack for the entire duration of the function. If those same arguments are passed in registers, the arguments exist only as long as they're needed. If they're unused, consumed before a function call, or passed down the call chain, they don't need to go to the stack.

The only things that need to go on the stack as you go down the call chain are values that are live across the call and don't have other storage--compiler temps and arguments that are used after the call.

I haven't looked at the document linked above, but I wouldn't be surprised if the x86-64 calling convention also splits the GPRs between caller-saves and callee-saves, thereby also reducing the number of slots reserved for values that are live across calls.

Separate from compiler temps and live-across-call values are spill values. In my experience, modern compilers allocate a stack frame once at the start of a function and maintain it through the life of the function (alloca() being a notable exception, allocating beyond the static frame). If a function has a lot of spilled values, these too get statically allocated. x86 has less than half as many general purpose registers as x86-64, resulting in greater numbers of spilled variables as well.

Make sense?

How about an example? Here's the function prolog from ay8910_write in my Intellivision emulator, compiled for x86:

ay8910_write:
    subl    $60, %esp   #,

The function allocates a 60 byte stack frame for itself, in addition to 12 bytes for arguments 2 through 4. (Only the first argument gets passed in a register as I recall). That's 72 bytes. Here's the same function prolog on x86-64:

ay8910_write:
    movq    %r13, -24(%rsp) #,
    movq    %r14, -16(%rsp) #,
    movq    %rdi, %r13  # bus, bus
    movq    %r15, -8(%rsp)  #,
    movq    %rbx, -48(%rsp) #,
    movl    %edx, %r15d # addr, addr
    movq    %rbp, -40(%rsp) #,
    movq    %r12, -32(%rsp) #,
    subq    $56, %rsp   #,

This version allocated 56 bytes, and had all its arguments passed in registers. That's 16 bytes smaller.

I picked this function not because it's some extraordinary function, but rather because it's moderately sized with a moderate number of arguments, and it's smack dab in the middle of a call chain. And it's in production code.

4K stacks by default?

Posted Apr 25, 2008 17:41 UTC (Fri) by NAR (subscriber, #1313) [Link]

That's interesting. I thought that local variables are also stored on the stack, and if you
have pointers or integers, which are bigger on x86-64, then the storage needed for these
variables on the stack is also bigger. Of course, a clever compiler can optimize these
variables into registers...

4K stacks by default?

Posted Apr 25, 2008 20:39 UTC (Fri) by nix (subscriber, #2304) [Link]

Generally, even if locals live in registers they'll get stack slots 
assigned, because you have to store the locals somewhere across function 
calls. (Completely trivial leaf functions with almost no variables *might* 
be able to get away without it, but that's not the common case.)

4K stacks by default?

Posted Apr 25, 2008 21:12 UTC (Fri) by jzbiciak (✭ supporter ✭, #5246) [Link]

They should only *need* to get stored if

1. They're live-across-call and there are no callee-save registers to park the values in.
2. They get spilled due to register pressure.
3. Their address gets taken.
4. Their storage class requires storing to memory (e.g. volatile).

And there could be other reasons where it *might* end up on the stack, such as:

5. The compiler isn't able to register allocate the type--this happens most often with
aggregates.
6. Compilation / debug model needs it on the stack.
7. Cost model for the architecture suggests register allocation for the variable isn't a win.

#1 above is actually pretty powerful.  Texas Instruments' C6400 DSP architecture has 10
registers that are callee-save and the first 10 arguments of function calls are passed in
registers.  The CPU has 64 registers total.  All these work together to absorb and eliminate
quite a bit of stack traffic on that architecture.  

I'm less familiar w/ GCC, the x86 and x86-64 ABIs and how they work, which prompted my
original question.
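A small example makes cases 1 and 3 from the list above concrete; the function names here are hypothetical, not taken from the thread:

extern int g(int *);
extern int h(int);

int f(int x)
{
        int tmp = x * 7;        /* live across both calls: case 1 -- needs a
                                   callee-save register or a stack slot */
        int addr_taken = x + 1; /* address taken: case 3 -- must be given a
                                   real stack slot */

        g(&addr_taken);
        return h(tmp) + addr_taken;
}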

4K stacks by default?

Posted Apr 25, 2008 21:29 UTC (Fri) by jzbiciak (✭ supporter ✭, #5246) [Link]

In that last bit of comment, I should say "the notion of having some number of callee-save
registers" is pretty powerful.  If a function doesn't use very many registers, it may never
have to touch the callee-save registers.  If a caller only has a handful of live-across-call
variables, it may be able to fit them entirely into callee-save registers.  

This limits stack traffic in the body of the function dramatically, causing some additional
traffic at the edges of the mid-level function to save/restore the callee-save registers.
Those save/restore sequences tend to be fairly independent of the rest of the code, too, which
works well on dynamically scheduled CPUs.

4K stacks by default?

Posted Apr 29, 2008 21:02 UTC (Tue) by gswoods (subscriber, #37) [Link]

The main problem I've had with Fedora's 4K stacks involves using modems. Are there any modems
out there that can be purchased new, provide full access to the AT command set (so that an
answering machine can be implemented using vgetty) and can handle faxing, that DON'T require a
proprietary driver and NDISwrapper?

Not a big deal for embedded

Posted May 1, 2008 15:20 UTC (Thu) by klossner (subscriber, #30046) [Link]

I disagree that 4K stacks are significant in the embedded world.  We only run a few dozen
processes so the footprint is not much, and we don't start thousands of new processes per hour
so the order-1 allocation doesn't matter.  On the other hand, it's *really* important to us that
corner cases which don't fit in 4K not cause a problem.

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds