4K stacks by default?

Posted Apr 24, 2008 10:50 UTC (Thu) by MathFox (guest, #6104)
In reply to: 4K stacks by default? by scarabaeus
Parent article: 4K stacks by default?

The page fault code needs stack too...

IIRC the Linux developers made the explicit decision that kernel code and data will always be
in RAM; a page fault from kernel code is a reason to panic. If you want to make kernel code
and data demand-pageable, you must take care that all code (and data) needed for paging the
swapped-out kernel pages back in is itself locked in RAM; otherwise you hit the classic
deadlock: "the data I need to load this page is only available in the swap." Linux systems can
swap to a local file system or over the network, so a lot of code (and data) would have to be
locked to keep the system running. The kernel gurus decided that keeping the kernel 100%
resident was far easier to manage.
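As a userspace analogy (mlock() and munlock() are the real POSIX calls; the handler-state
structure is invented for illustration), pinning everything the fault path touches looks
roughly like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    /*
     * Userspace analogy: a fault handler must never touch memory that can
     * itself be paged out, so we wire its working set into RAM up front.
     * mlock() is real POSIX; "fault_handler_state" is made up for the sketch.
     */
    struct fault_handler_state {
        char io_buffer[4096];   /* buffer used while reading from swap */
        int  swap_fd;           /* descriptor for the swap device */
    };

    int main(void)
    {
        struct fault_handler_state *st = malloc(sizeof(*st));
        if (!st)
            return 1;
        memset(st, 0, sizeof(*st));

        /* Pin the state: after this, touching *st can never fault to disk. */
        if (mlock(st, sizeof(*st)) != 0) {
            perror("mlock");
            return 1;
        }

        printf("handler state pinned at %p\n", (void *)st);
        munlock(st, sizeof(*st));
        free(st);
        return 0;
    }

The kernel's internal pinning machinery is different, but the constraint is the same: every
byte the fault path depends on has to be resident before the fault happens.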



4K stacks by default?

Posted Apr 25, 2008 3:02 UTC (Fri) by giraffedata (subscriber, #1954)

I think a more fundamental problem with paging in extra stack when you need it is that for a stack to work, it has to be in contiguous address space. The addresses past the end of the stack aren't available when you need them.

I believe address space is a more scarce resource than physical memory on many systems these days.
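Some back-of-the-envelope numbers for 32-bit x86 with the common 3G/1G split (the thread count
and the guard-page reservation are hypothetical, just to show the scale):

    #include <stdio.h>

    /*
     * Illustrative arithmetic: how much of the 1 GB kernel virtual address
     * window would per-thread stacks consume? The workload is made up.
     */
    int main(void)
    {
        const unsigned long kernel_va  = 1UL << 30;  /* 1 GB kernel window */
        const unsigned long stack_size = 8192;       /* 8K stack */
        const unsigned long guard_page = 4096;       /* unmapped slot kept free
                                                        so the stack could grow */
        const unsigned long threads    = 50000;      /* hypothetical load */

        unsigned long per_thread = stack_size + guard_page;
        unsigned long total = threads * per_thread;

        printf("%lu threads x %lu bytes = %lu MB of kernel address space "
               "(%.0f%% of the 1 GB window)\n",
               threads, per_thread, total >> 20,
               100.0 * total / kernel_va);
        return 0;
    }

Reserving even one extra unmapped page per stack eats address space whether or not physical
memory ever backs it.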

4K stacks by default?

Posted Apr 25, 2008 18:11 UTC (Fri) by scarabaeus (guest, #7142)

Thanks for your comments! :-) But I still don't understand...

I'm not proposing to swap out kernel stack pages. Instead, I'm wondering why it isn't possible
to just allocate additional memory pages for the stack the moment a page fault happens because
the currently allocated stack overflows.

This assumes that it is possible to just map in additional pages. Why does the stack have to
be in contiguous memory - is it addressed via its physical address?

If so, is the cost of setting up a virtual page mapping too high? New stack pages would have
to be allocated only very rarely, so the operation wouldn't have to be fast...
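The idea is easy to demonstrate in userspace, where stacks effectively do grow on fault. A
rough sketch with a SIGSEGV handler (SA_SIGINFO, sigaltstack() and MAP_FIXED are real
interfaces; treating every fault as a stack fault, and calling mmap() from a signal handler,
are deliberate simplifications that happen to work on Linux):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PAGE 4096UL

    /* Map the faulting page on demand -- the userspace version of
     * "just allocate another stack page when the overflow faults". */
    static void grow_on_fault(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        void *page = (void *)((uintptr_t)si->si_addr & ~(PAGE - 1));
        if (mmap(page, PAGE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED)
            _exit(1);   /* only async-signal-safe calls in a handler */
    }

    int main(void)
    {
        /* The handler itself needs a stack that is already mapped. */
        static char altstack[64 * 1024];
        stack_t ss = { .ss_sp = altstack, .ss_size = sizeof(altstack) };
        sigaltstack(&ss, NULL);

        struct sigaction sa = { .sa_sigaction = grow_on_fault,
                                .sa_flags = SA_SIGINFO | SA_ONSTACK };
        sigaction(SIGSEGV, &sa, NULL);

        /* Reserve address space; map only the upper page of a 2-page "stack". */
        char *region = mmap(NULL, 2 * PAGE, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED)
            return 1;
        mmap(region + PAGE, PAGE, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        region[PAGE] = 1;   /* mapped page: fine */
        region[0] = 1;      /* unmapped page: faults, handler maps it */
        printf("grew the stack region on fault\n");
        return 0;
    }

Note that the handler has to run on a pre-mapped alternate stack, which is exactly MathFox's
point: the code that grows the stack needs resources that are already resident.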

4K stacks by default?

Posted Apr 25, 2008 20:41 UTC (Fri) by nix (subscriber, #2304)

If a page fault happens, you might need to swap pages out in order to satisfy the request for
an additional page. You might think you could just use GFP_ATOMIC allocation for this, but the
pages have to be contiguous (which might involve memory motion and swapping on its own), and
if a lot of processes all need extra stack at once, you'll run short on the free (and normally
unused) memory available for GFP_ATOMIC allocations.
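To make that concrete, here is a minimal kernel-module sketch (illustrative, not anything the
real stack code does): an order-1 __get_free_pages() call asks the buddy allocator for two
physically contiguous pages, and GFP_ATOMIC means it cannot sleep to reclaim or compact
memory, so under pressure it simply fails:

    #include <linux/gfp.h>
    #include <linux/init.h>
    #include <linux/module.h>

    /* Sketch: grab 8K of *physically contiguous* memory without sleeping.
     * order = 1 means 2^1 pages; GFP_ATOMIC forbids blocking, so if the
     * buddy allocator has no free order-1 block, this just returns 0. */
    static unsigned long grow_stack_attempt(void)
    {
        unsigned long pages = __get_free_pages(GFP_ATOMIC, 1);

        if (!pages)
            return 0;   /* no contiguous pair available right now */
        return pages;
    }

    static int __init demo_init(void)
    {
        unsigned long p = grow_stack_attempt();

        if (p)
            free_pages(p, 1);
        return 0;
    }

    static void __exit demo_exit(void) { }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

The kernel keeps only a small reserve of free pages for atomic allocations, which is the
normally unused memory mentioned above; many simultaneous stack-growth faults would drain it.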

4K stacks by default?

Posted Apr 25, 2008 21:31 UTC (Fri) by giraffedata (subscriber, #1954)

"Why does the stack have to be in contiguous memory - is it addressed via its physical address?"

Contiguous virtual memory. That's what I meant by the address space being the scarce resource. We can afford to allocate 4K of virtual addresses when the process is created, but we can't afford to allocate 8K of them even if the second 4K isn't mapped to physical memory until needed.

With a different memory layout, Linux might not have that problem. Some OSes put the kernel stack in a separate address space for each process. But Linux puts all of the kernel memory, including every process' stack, in all the address spaces. So even if the kernel were pageable, there would still be a virtual address allocation problem.
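A toy comparison of the two layouts, with made-up numbers:

    #include <stdio.h>

    /*
     * Contrast the two layouts described above (illustrative numbers).
     * Shared layout (Linux): every task's kernel stack occupies kernel
     * virtual addresses mapped into *all* address spaces, so the cost
     * scales with the total number of tasks in the system.
     * Per-process layout (some other OSes): each address space carries
     * only its own kernel stack in a fixed virtual slot.
     */
    int main(void)
    {
        const unsigned long tasks = 50000;   /* hypothetical system load */
        const unsigned long stack = 8192;    /* 8K kernel stack */

        printf("shared kernel space: %lu MB of kernel VA consumed in every "
               "address space\n", tasks * stack >> 20);
        printf("per-process slot:    %lu KB of kernel VA consumed per "
               "address space\n", stack >> 10);
        return 0;
    }

In the shared layout, every task's stack reservation is paid in everyone's address space,
which is why even a pageable kernel wouldn't make the virtual address problem go away.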

