Virtual Memory I: the problem

Posted Mar 11, 2004 18:36 UTC (Thu) by mmarkov (guest, #4978)
Parent article: Virtual Memory I: the problem

If the kernel wishes to be able to access the system's physical memory directly, however, it must set up page tables which map that memory into the kernel's part of the address space. With the default 3GB/1GB mapping, the amount of physical memory which can be addressed in this way is somewhat less than 1GB - part of the kernel's space must be set aside for the kernel itself, for memory allocated with vmalloc(), and various other purposes.
Honestly, I don't understand why only 1GB is accessible under these premises.

PS Great article, Jon. In fact, great articles, both part I and part II.



Virtual Memory I: the problem

Posted Mar 11, 2004 22:17 UTC (Thu) by jmshh (guest, #8257)

The keyword here is "directly", i.e. without any manipulation of page
tables. So all of the physical RAM that is to be directly accessible has
to fit inside the kernel's 1GB of virtual address space, together with
some other stuff, like video buffers.
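
To put rough numbers on it, here is a minimal sketch of the arithmetic (in C,
assuming the usual 32-bit x86 values: a user/kernel boundary at 0xC0000000 and
roughly 128MB at the top of kernel space set aside for vmalloc() and friends;
the exact figures depend on configuration):

    #include <stdio.h>

    int main(void)
    {
        const unsigned long long four_gb      = 1ULL << 32;    /* full 32-bit address space          */
        const unsigned long long page_offset  = 0xC0000000ULL; /* user/kernel boundary (3G/1G split) */
        const unsigned long long kernel_space = four_gb - page_offset; /* 1 GB of kernel virtual space */
        const unsigned long long vmalloc_etc  = 128ULL << 20;  /* assumed ~128 MB for vmalloc() etc.  */

        printf("kernel virtual space: %llu MB\n", kernel_space >> 20);                 /* 1024 MB */
        printf("directly mapped RAM : %llu MB\n", (kernel_space - vmalloc_etc) >> 20); /*  896 MB */
        return 0;
    }

That leaves roughly 896MB of "low memory" reachable through the kernel's
direct mapping; anything beyond that has to be mapped in and out as
"high memory".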

Why only 1 G is directly accessible

Posted Mar 12, 2004 9:51 UTC (Fri) by Duncan (guest, #6647)

First, keep in mind that we are talking about a 4 gig address space, the
limit of the "flat" memory model: 32 bits of address, with each address
serving one byte of memory. One can of course play with the
byte-per-address model and make it, say, two bytes or a full 32-bit
4 bytes, but then we get into serious compatibility problems with
current software that assumes one-byte handling. The implications of that
would be HUGE, and NOBODY wants to tackle the task of ensuring
4-byte-per-address clean code, since the assumption has been
byte-per-address virtually forever, and virtually ALL programs have that
axiom written so deeply into their code that you might as well start over
again (which is sort of what Intel argued should be the case with Itanic
and its clean-start approach anyway, taking the opportunity to move
cleanly to 64-bit, which is why it never really took off, but that's an
entirely different topic). It's simply easier to move to a 64-bit address
space than to tinker with the byte-per-address assumption. Thus, 32-bit
is limited to 4 gig of directly addressable memory in any practical case.
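
In arithmetic terms (a trivial sketch; the wider-than-a-byte granules are
purely hypothetical, as above):

    #include <stdio.h>

    int main(void)
    {
        /* 32 address bits give 2^32 distinct addresses. */
        const unsigned long long addresses = 1ULL << 32;

        printf("1 byte/address : %llu GB\n", (addresses * 1) >> 30);  /*  4 GB */
        /* Hypothetical wider granules, as speculated above: */
        printf("2 bytes/address: %llu GB\n", (addresses * 2) >> 30);  /*  8 GB */
        printf("4 bytes/address: %llu GB\n", (addresses * 4) >> 30);  /* 16 GB */
        return 0;
    }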

Another solution, as generally used back in the 16-bit era, is called
"segmented" memory. The address back then consisted of a 16-bit "near"
address and a 16-bit "segment" address. The issue, as one would expect,
amounted to one of performance. It was comparatively fast to access
anything within the same segment, much slower to address anything OUTSIDE
the segment. As it happened, 64k was the segment size, and if you
remember anything from that era, it might be that editors, for instance,
quite commonly limited the editable file to somewhat less than 64k, so
they could access both their own operational memory AND the datafile
being edited within the same 64k segment. However, the benefits of "flat"
memory are such that few want to go back to a segmented memory model if
it is at all possible to stay away from it. (That said, the various high
memory models do essentially that, but they try to manage it at the
system level so at least individual applications don't have to worry
about it, as they did back in the 16-bit era.)
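
For the curious, here is what that segment arithmetic looked like on the
8086, in a small C sketch (real-mode address formation, purely for
illustration, and nothing to do with how the kernel works today):

    #include <stdio.h>
    #include <stdint.h>

    /* 8086 real mode: a 16-bit segment and a 16-bit offset combine into a
     * 20-bit physical address.  Each segment spans 64k, which is where the
     * "editors limited to a bit under 64k" pain came from. */
    static uint32_t real_mode_linear(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        printf("0x1234:0x0010 -> 0x%05X\n",
               (unsigned)real_mode_linear(0x1234, 0x0010)); /* 0x12350 */
        printf("segment size: %u bytes\n", 1u << 16);       /* 65536   */
        return 0;
    }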

That still doesn't "address" (play on words intentional) the 1-gig
kernel-space, 3-gig user-space "soft" limit. As you mention, yes, in
theory the kernel /can/ address the full 4 gig. The problem, however, is
hinted at elsewhere in the article where it talks about the 4G/4G patch:
give the kernel the full 4 gig of address space to itself, and switching
between usermode and kernelmode becomes even MORE tremendously expensive
than it already is, in performance terms, because the two modes no longer
share one set of page tables. The entire 4-gig "picture" has to be
flushed (more on that below) so the "picture" of the other mode can be
substituted without losing data. As explained in the article, each mode
then has to manage its own memory picture, and the performance cost of
flushing that picture so another one can replace it at each switch is
enormous.

As already mentioned in other replies, there are a number of solutions,
each with its own advantages and disadvantages. One is the 2G/2G split,
which BTW is what MSWormOS uses. This symmetric approach allows both the
kernel and userspace to access the same four-gig maximum "picture", each
from its own context but sharing the picture, so the performance issues
of flushing it don't come into play. It does give the kernel more
comfortable room to work in, but at the expense of that extra gig for
userspace. While few applications need more than their two-gig share of
memory, the very types of applications that do, huge database
applications and other such things, happen to be run on the same sorts of
systems that need that extra room for the kernel: huge enterprise systems
with well over eight gig of physical memory. Thus, the 2G/2G solution is
a niche solution that will fit only a very limited subset of those running
into the problem in the first place. The 4G/4G solution is more practical
-- EXCEPT that it carries those huge performance issues. Well, there's
also the fact that even a 4G/4G solution only doubles the space available
to work with, and thus is only a temporary fix at best, perhaps two
years' worth, maybe 3-4 by implementing other "tricks" with their own
problems, even if the base performance issue didn't apply. That's where
the next article comes in.
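
To make the splits concrete, a tiny sketch using the boundary addresses
usually quoted for 32-bit x86 (the exact values are a kernel configuration
choice, so treat them as illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* Where user space ends and kernel space begins for each split. */
        struct { const char *name; unsigned long user_top; } splits[] = {
            { "3G/1G", 0xC0000000UL },  /* the default discussed in the article */
            { "2G/2G", 0x80000000UL },  /* the symmetric split mentioned above  */
        };

        for (unsigned i = 0; i < sizeof splits / sizeof splits[0]; i++)
            printf("%s: user space ends at 0x%08lX (%lu MB user, %lu MB kernel)\n",
                   splits[i].name, splits[i].user_top,
                   splits[i].user_top >> 20,
                   (unsigned long)(((1ULL << 32) - splits[i].user_top) >> 20));

        /* 4G/4G is different in kind: user and kernel each get (nearly) the
         * whole 4 gig, but in separate page tables, so every trip into the
         * kernel switches address spaces; that is the performance cost
         * discussed above. */
        return 0;
    }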

The loose end left to deal with is that flushing, mentioned above. I must
admit to not fully understanding this myself, but a very simplistic view
of things would be to imagine a system with 8 gig of physical memory,
dealt with using the previously mentioned "segments", of which there would
be two, one each for userspace and kernel space. A mode switch would then
simply mean changing the segment reference, after ensuring all cached
memory is flushed out to the appropriate segment, of course.

Practice, of course, rarely matches that concept, and even if a system DID
happen to have exactly eight gig of memory, such a simplistic model
wouldn't work in real life because of /another/ caveat: each application
has its own virtual address space map, and few actually use the entire
thing, so one would be writing to swap (100 to 1000 times slower than
actual memory, and thus a generally poor solution if not absolutely
necessary) entirely unnecessarily, with only one application runnable at
a time.

That of course is where vm = virtual memory comes in, as it allows all the
space unused by one app or the kernel itself to be used by another, with
its own remapping solutions. However, that's the part I don't really
understand, so I won't attempt to explain it. Besides, this post is long
enough already. <g> Just understand that flushing is a necessary process
costly enough that it should be avoided where possible, and that the
concept is one of clearing the slate so it can be used for the new memory
picture, while retaining the data of the first one so it can be used
again.

Duncan

