Virtual Memory I: the problem

Posted Mar 21, 2004 1:02 UTC (Sun) by alpharomeo (guest, #20341)
Parent article: Virtual Memory I: the problem

Some questions/comments:

1) Why not have the kernel manage memory in de facto pages larger than 4K? Larger pages are how other OSes manage large memory efficiently, whether we are talking about 32-bit or 64-bit addressing. To avoid breakage, why not keep the current 4K page size but have the kernel always allocate/free pages in blocks of, say, 16 pages, or even 256 pages? Then the per-page bookkeeping would be vastly smaller.
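To put very rough numbers on that last point, here is a back-of-the-envelope sketch; the 4 GB of RAM and the 40 bytes of bookkeeping per managed unit (a struct page is a few tens of bytes, a page-table entry is smaller) are assumptions I am plugging in, not measured figures:

    /* Rough arithmetic only, not kernel code: how much smaller the
     * per-page bookkeeping gets if memory is managed in larger units. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long ram       = 4ULL << 30;  /* assume 4 GB of physical RAM */
        unsigned long      page_size = 4096;        /* current x86 page size */
        unsigned long      desc_size = 40;          /* assumed bytes of bookkeeping per unit */
        unsigned long      cluster;

        /* compare 1-page, 16-page and 256-page management units */
        for (cluster = 1; cluster <= 256; cluster *= 16) {
            unsigned long long units    = ram / (page_size * cluster);
            unsigned long long overhead = units * desc_size;
            printf("%3lu-page units: %llu units to track, ~%llu KB of bookkeeping\n",
                   cluster, units, overhead >> 10);
        }
        return 0;
    }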

2) It may be convenient to say "use a 64-bit machine". The fact is that 64-bit addressing is inefficient and overkill for many situations. Experience with several architectures that support simultaneous 32-bit and 64-bit applications has shown that the 64-bit builds run at least 25-30% slower than the corresponding 32-bit builds. Bigger addresses mean bigger programs, bigger stacks and heaps, etc. Some applications may require nearly twice as much memory when run in 64-bit mode. So, why not optimize the kernel to provide better support for 32-bit addressing? In particular, what is so wrong with supporting arbitrarily large physical memory but limiting process address space to 4 GB (or 3 GB)?
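As a small illustration of where the extra memory goes, a pointer-heavy structure compiled both ways (assuming a compiler with -m32/-m64 switches, such as gcc on a bi-arch platform) roughly doubles in size:

    /* Print the sizes that change between 32-bit and 64-bit builds. */
    #include <stdio.h>

    struct list_node {
        struct list_node *next;   /* 4 bytes in 32-bit mode, 8 in 64-bit */
        struct list_node *prev;
        long key;                 /* also grows from 4 to 8 bytes */
        int  value;
    };

    int main(void)
    {
        printf("sizeof(void *)           = %zu\n", sizeof(void *));
        printf("sizeof(long)             = %zu\n", sizeof(long));
        printf("sizeof(struct list_node) = %zu\n", sizeof(struct list_node));
        return 0;
    }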

3) Why is shared memory such a big problem? We have never been able to get Linux to allow shared memory segments larger than just under 1 GB. Is there some trick to it? Linux reports that there is insufficient swap space, but it does not matter how many swap partitions you allocate - you always get the same error.
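In case it helps pin down what we are seeing, here is a minimal sketch of how such a segment gets requested; my guess, and it is only a guess, is that the kernel.shmmax / kernel.shmall tunables (e.g. /proc/sys/kernel/shmmax) or the 3 GB per-process address space are what is getting in the way:

    /* Try to create and attach a large System V shared memory segment. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        size_t size = (size_t)1536 * 1024 * 1024;   /* try 1.5 GB */
        int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);

        if (id == -1) {
            /* EINVAL usually means size exceeds SHMMAX; ENOSPC that the
             * system-wide SHMALL or segment-count limit was hit. */
            fprintf(stderr, "shmget failed: %s\n", strerror(errno));
            return 1;
        }

        void *addr = shmat(id, NULL, 0);  /* can still fail if no 1.5 GB hole
                                             exists in the process address space */
        if (addr == (void *)-1)
            fprintf(stderr, "shmat failed: %s\n", strerror(errno));
        else
            shmdt(addr);

        shmctl(id, IPC_RMID, NULL);       /* clean up the segment */
        return 0;
    }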

Thanks!

