LWN: Comments on "An end to high memory?" https://lwn.net/Articles/813201/ This is a special feed containing comments posted to the individual LWN article titled "An end to high memory?". en-us Sun, 05 Oct 2025 10:24:20 +0000 Sun, 05 Oct 2025 10:24:20 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net An end to high memory? https://lwn.net/Articles/921562/ https://lwn.net/Articles/921562/ ringerc <div class="FormattedComment"> Beware of removing high memory.<br> <p> The 16 Exabyte boundary will be a problem before we know it 😀<br> </div> Mon, 30 Jan 2023 01:57:11 +0000 An end to high memory? https://lwn.net/Articles/814285/ https://lwn.net/Articles/814285/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; Any idea what devices are actually affected by this?</font><br> <p> I know about precisely one MIPS board - the MT7621A that you have already mentioned. So you already know more than me :-(<br> <p> Maybe ask on linux-mips ??<br> </div> Sun, 08 Mar 2020 23:46:59 +0000 An end to high memory? https://lwn.net/Articles/814012/ https://lwn.net/Articles/814012/ arnd <div class="FormattedComment"> Any idea what devices are actually affected by this?<br> <p> As far as I can tell, most MIPS32r2 based chips are limited to less than 512MB anyway by having only a single DDR2 channel, your MT7621A being a notable exception that has up to 512MB DDR3. <br> <p> Out of the chips that support more than 512MB, the ones based on MIPS32r3 or higher (Baikal, Intel/Lantiq, ...) may be able to use the extended virtual addressing to allow larger direct lowmem mappings with a bit of work.<br> <p> The Creator CI20 is one that has 1GB of RAM on MIPS32r2, and I can see that the bmips defconfigs enable highmem support, so I would guess they also need it, but I could not find any actual products. Are there others?<br> </div> Fri, 06 Mar 2020 14:19:28 +0000 An end to high memory? https://lwn.net/Articles/813997/ https://lwn.net/Articles/813997/ khim <p>Indeed, IBM was never concerned about memory limits and hadn't planned for that infamous 640KiB barrier to ever exist.</p> <p>Your opponent says <i>IBM didn't have to reserve ⅜ of the (20-bit) address space for the UMA, 256 KiB or even only 128 KiB would have been possible too</i>… but IBM <b>hadn't</b> reserved ⅜ of the address space for that! Look at the <b>System Memory Map</b> <a href="http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/pc/pc/6025008_PC_Technical_Reference_Aug81.pdf">in the manual</a>. It places the "128KB RESERVED GRAPHIC/DISPLAY BUFFER" at the 256K position!</p> <p>The XT <a href="http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/pc/xt/1502237_PC_XT_Technical_Reference_Apr83.pdf">acknowledged</a> the fact that you may actually add more RAM, but even the AT manual <a href="http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/pc/at/6183355_PC_AT_Technical_Reference_Mar86.pdf">says it's not standard use but an option which requires an add-on card</a>!</p> <p>In fact that's how IBM itself perceived it till the PS/2: that's why they were so happy to allow IBM PC DOS 4 to become so much larger than IBM PC DOS 3. The idea was: "hey, 512K is standard now, we could offer the option card we talked about long ago and people would get a bigger DOS, yet more space for programs, too."</p> <p>Only by that time "an option" had already become a de-facto "standard" and many packages needed 640KiB with DOS 3…</p> <p>P.S.
And after all that, IBM decided that it doesn't matter how much memory the BIOS takes since everyone would soon use OS/2 anyway… and introduced ABIOS… but that's another story…</p> Fri, 06 Mar 2020 11:32:04 +0000 An end to high memory? https://lwn.net/Articles/814001/ https://lwn.net/Articles/814001/ kpfleming <div class="FormattedComment"> Sorry, that wasn't quite what I meant to say. I follow the stream of kernel updates provided by the Raspberry Pi foundation, so there are frequent kernel updates. If they can deliver an 'update' from a 32-bit kernel to a 64-bit kernel without requiring significant changes in the userspace tools installed on the machine, then users could be upgraded in place.<br> </div> Fri, 06 Mar 2020 11:09:10 +0000 An end to high memory? https://lwn.net/Articles/814000/ https://lwn.net/Articles/814000/ geert <div class="FormattedComment"> So they keep on using their current kernel, which includes highmem support.<br> </div> Fri, 06 Mar 2020 11:04:19 +0000 An end to high memory? https://lwn.net/Articles/813999/ https://lwn.net/Articles/813999/ kpfleming <div class="FormattedComment"> It can be done today, if you're willing to give up support for some of the onboard peripherals. Even when there is full hardware support, it's likely that most users won't convert their existing systems.<br> </div> Fri, 06 Mar 2020 10:59:07 +0000 An end to high memory? https://lwn.net/Articles/813986/ https://lwn.net/Articles/813986/ geert <div class="FormattedComment"> But one day the Raspberry Pi 4B will run a 64-bit kernel, solving the problem, right?<br> </div> Fri, 06 Mar 2020 09:39:02 +0000 An end to high memory? https://lwn.net/Articles/813917/ https://lwn.net/Articles/813917/ kpfleming <div class="FormattedComment"> Pretty much every Raspberry Pi 4B sold with 2GB or 4GB of RAM is in this situation, because the default (and still the only well-supported) kernel is a 32-bit kernel. There are large numbers of these, and they are being sold and installed today, so can be expected to be in use for quite some time.<br> </div> Thu, 05 Mar 2020 18:52:30 +0000 An end to high memory? https://lwn.net/Articles/813689/ https://lwn.net/Articles/813689/ neilbrown <p>32-bit MIPS needs high memory just to make use of 512MB of RAM. The first 448MB are directly mapped, then there is IO space, then there is the rest of RAM as high-mem. <p> Maybe this "interesting" address space layout could be managed differently to avoid the dependency on highmem - I don't know. <pre>
[    0.000000] MIPS: machine is GB-PC1
[    0.000000] Determined physical RAM map:
[    0.000000]  memory: 1c000000 @ 00000000 (usable)
[    0.000000]  memory: 04000000 @ 20000000 (usable)
...
[    0.000000] Zone ranges:
[    0.000000]   Normal   [mem 0x0000000000000000-0x000000001fffffff]
[    0.000000]   HighMem  [mem 0x0000000020000000-0x0000000023ffffff]
...
</pre> Mon, 02 Mar 2020 19:31:51 +0000 An end to high memory? https://lwn.net/Articles/813615/ https://lwn.net/Articles/813615/ dave4444 <div class="FormattedComment"> Having worked on many 32-bit CPUs with large memory, this is somewhat nostalgic, but its time is probably near the end. And yes, there were many 32-bit ARM CPUs with large amounts of memory (such as some NPUs!). Intel also had some SKUs with 32-bit cores long after consumer and server chips supported 64-bit, notably some SoCs and Atom CPUs. Deprecating it is probably due (especially for some archs).<br> <p> Let's also not forget about highmem's ugly stepchild, the bounce buffer for IO devices that DMA to/from high memory. Oh, the headaches from that.<br> </div> Mon, 02 Mar 2020 01:05:32 +0000
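The bounce-buffer pattern mentioned above boils down to handing a device a lowmem copy of a highmem page it cannot reach. A minimal sketch of the idea, using the kmap_atomic() API of that era (illustrative only; the real implementation in block/bounce.c handles many more cases, and bounce_for_dma() is a hypothetical helper, not a kernel interface): <pre>
/*
 * Illustrative bounce buffer for a highmem page, assuming a device
 * that can only DMA to lowmem. Error handling kept minimal.
 */
#include &lt;linux/gfp.h&gt;
#include &lt;linux/highmem.h&gt;
#include &lt;linux/mm.h&gt;
#include &lt;linux/string.h&gt;

static struct page *bounce_for_dma(struct page *page)
{
	struct page *bounce;
	void *src;

	if (!PageHighMem(page))
		return page;            /* device can reach lowmem directly */

	/* Allocate a lowmem page the device can address (no __GFP_HIGHMEM). */
	bounce = alloc_page(GFP_NOIO);
	if (!bounce)
		return NULL;

	/* Copy the data down through a temporary kernel mapping. */
	src = kmap_atomic(page);
	memcpy(page_address(bounce), src, PAGE_SIZE);  /* lowmem is always mapped */
	kunmap_atomic(src);

	return bounce;                  /* DMA from this page instead */
}
</pre>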
An end to high memory? https://lwn.net/Articles/813581/ https://lwn.net/Articles/813581/ ldearquer <div class="FormattedComment"> I'd be sorry to see highmem go, now that I *finally* got a decent understanding of what it is and where it came from :) Thanks for a great article!<br> </div> Sun, 01 Mar 2020 12:32:47 +0000 An end to high memory? https://lwn.net/Articles/813568/ https://lwn.net/Articles/813568/ willy <div class="FormattedComment"> Ah, I see the ambiguity in what flussence said now.<br> <p> Yes, you're correct, only 896MB of physical RAM is usable unless you enable at least HIGHMEM4G.<br> </div> Sat, 29 Feb 2020 19:29:02 +0000 An end to high memory? https://lwn.net/Articles/813567/ https://lwn.net/Articles/813567/ nivedita76 <div class="FormattedComment"> Userspace gets 3GB of address space by default whether HIGHMEM is enabled or not, but I thought you can only access 896MB of physical memory if NOHIGHMEM, regardless of how much you have installed -- HIGHMEM4G is what will let you access the rest of RAM, no?<br> </div> Sat, 29 Feb 2020 17:56:45 +0000 An end to high memory? https://lwn.net/Articles/813564/ https://lwn.net/Articles/813564/ willy <div class="FormattedComment"> With HIGHMEM entirely disabled, on x86, userspace gets 3GB of address space, the kernel gets 896MB of physical memory, and the remainder is used for PCI BARs, vmalloc space and random other gunk.<br> </div> Sat, 29 Feb 2020 12:25:55 +0000 Updated list of machines with 4GB or more https://lwn.net/Articles/813561/ https://lwn.net/Articles/813561/ arnd <div class="FormattedComment"> <font class="QuotedText">&gt; How can it be possible that a compiler produces different results depending on the kernel running the compiler (instead of depending on the kernel targeted by the compiler)?</font><br> <p> This is about building entire packages, not just a single source file, so a lot of differences between native 32-bit kernels and compat mode can be relevant, though it would usually be a bug in a package if they are:<br> <p> * The contents of /proc/cpuinfo, uname or the cpuid registers are different, which can confuse configure scripts in all kinds of ways, such as thinking they are cross-compiling when they should be building natively, or being unable to parse the incompatible /proc/cpuinfo format. Simply changing these to look like an ARMv7 CPU would be bad for applications that have a legitimate interest in finding out the actual CPU type on a 64-bit kernel.<br> <p> * For doing local builds, upstream maintainers may default to using -march=native, but when building a distro package you normally want to target the minimum supported CPU instead. This bug can also hit when building an ARMv6 package on an ARMv7VE host, just as it can when building an i586 package on a Skylake x86-64 host.<br> <p> * The compat mode in the kernel is a pretty good approximation of the native interfaces, but there are always a few differences -- usually those are bugs in compat handling, but there are also things like sysvipc being broken on native sparc32 kernels, or arm32 kernels traditionally not having NUMA syscalls, both of which worked fine in 64-bit compat mode.<br> <p> * On 64-bit kernels, you usually have 4GB of virtual address space for user space, while native kernels have only 3GB or less. This is great for most applications, but it can cause problems in a runtime environment that treats any high pointer value as a special cookie.<br> <p> Some of these can be worked around using the 'personality' interfaces, others should probably be configurable the same way, and some should just be considered bugs that need to be fixed in upstream user space packages. I personally think the build systems should use 64-bit kernels with 32-bit user space and fix the resulting bugs, but it's not my decision.<br> <p> <font class="QuotedText">&gt; How do crosscompilers even work then?</font><br> <p> You can probably cross-compile a majority of all packages in a distro these days, but there are enough exceptions that Debian or OBS decide to only build packages natively.<br> </div> Sat, 29 Feb 2020 10:48:04 +0000
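Those 'personality' interfaces are reachable from plain user space; this is roughly what the setarch(8)/linux32(1) wrappers do before launching a 32-bit build under a 64-bit kernel. A minimal sketch (the exact machine string reported varies by architecture): <pre>
/* Make uname(2) report a 32-bit machine string, then run a command.
 * Sketch of the idea behind setarch(8); error handling kept minimal.
 */
#include &lt;stdio.h&gt;
#include &lt;sys/personality.h&gt;
#include &lt;sys/utsname.h&gt;
#include &lt;unistd.h&gt;

int main(int argc, char **argv)
{
	struct utsname u;

	if (personality(PER_LINUX32) &lt; 0)
		perror("personality");

	uname(&amp;u);
	/* e.g. "armv8l" instead of "aarch64" on an arm64 kernel */
	printf("machine: %s\n", u.machine);

	if (argc &gt; 1)
		execvp(argv[1], argv + 1);  /* run the build "natively" */
	return 0;
}
</pre>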
Updated list of machines with 4GB or more https://lwn.net/Articles/813562/ https://lwn.net/Articles/813562/ rossburton I'm guessing the point was to do native and not cross-compilation. Sat, 29 Feb 2020 10:12:47 +0000 An end to high memory? https://lwn.net/Articles/813555/ https://lwn.net/Articles/813555/ mwsealey <div class="FormattedComment"> I didn't consider the security of mapping the entire physical address space into kernel virtual space but that's another good reason not to do it.<br> <p> "Low" or pinned memory for kernel allocations needs to be there, I don't deny that. However, as a way of providing watermarks for particular kinds of allocations, is it *really* "faster" to not have to map it?<br> <p> For 64-bit kernels, not having the linear map of an arbitrary amount of physical memory and just having those watermarks will still divide throwaway pages from fixed ones. For 32-bit kernels, they hit HIGHMEM pressure so quickly it doesn't make sense to me to "reserve" the memory used for non-HIGHMEM regions.<br> <p> In any case, by removing the linear map no physical address is mapped &gt;= twice to virtual addresses without there being an extremely specific purpose for it. Userspace pages will never "turn up" in the middle of kernel space, or alias with other memory types.<br> <p> For Arm the linear map is just trouble - I spend a good deal of time running training courses on Arm architecture and it is always disappointing to have to point out that a particular architectural feature like fine-grained memory type specification is just a pain in the backside in Linux.<br> <p> If you have a need to mark something as only outer cacheable or not-outer-shareable (and your HW supports that), the possibility that it also exists in the linear map as inner cacheable and outer shareable too just defeats any efficiencies you could gain from it. It serves to promote all cache activity to "L1 vs. DRAM" and all DVM transactions towards all other devices with nothing lighter in between. The x86-ism of poor TLB functionality and limited memory types and coherency traffic spamming leaks to other architectures like a plague.<br> <p> Getting rid of HIGHMEM and making everything a linear-mapped address range just codifies that x86-ism for 64-bit, while crippling 32-bit. I can't say I like either idea. I'd rather see HIGHMEM live on and we reduce our requirement (and subsequently watermarks) for NORMAL.
Spectre/Meltdown and the rise of virtualization have helped: now x86 cores are gaining more efficient TLB flush operations, working ASIDs, et al.<br> <p> I reckon it's worth investigating; for all we know it could surprisingly end up more performant and with fewer side effects, and give Linux a few extra features on top of a more fine-grained memory management subsystem. I'm failing to find significant parts of mm that are actually hardcore enough to rely on the linear mapping and don't have a "slow path" through using high memory (what requires it will allocate it, and then it's doing the job it is meant to do), and from what I've seen the machinations around contiguous/large pages and other zone allocators like CMA already implement all you'd need to efficiently divorce the kernel from having to keep the linear map around.<br> </div> Sat, 29 Feb 2020 02:57:03 +0000 Updated list of machines with 4GB or more https://lwn.net/Articles/813556/ https://lwn.net/Articles/813556/ clopez <div class="FormattedComment"> <font class="QuotedText">&gt; Calxeda Midway servers used in build farms for native 32-bit builds to avoid differences in compilation results between running on 32-bit and 64-bit kernels, usually with 16GB of RAM.</font><br> <p> How can it be possible that a compiler produces different results depending on the kernel running the compiler (instead of depending on the kernel targeted by the compiler)?<br> How do crosscompilers even work then?<br> </div> Sat, 29 Feb 2020 01:23:41 +0000 Updated list of machines with 4GB or more https://lwn.net/Articles/813544/ https://lwn.net/Articles/813544/ arnd <div class="FormattedComment"> I talked to a number of people about this at Embedded World and elsewhere to figure out who is affected. There were many more examples of 2GB systems with 32-bit ARM processors, which in theory could use a modified VMSPLIT_2G_OPT mapping. Systems with more than that, in no particular order, include:<br> <p> - Calxeda Midway servers used in build farms for native 32-bit builds to avoid differences in compilation results between running on 32-bit and 64-bit kernels, usually with 16GB of RAM.<br> - Most TI Keystone-2 systems (sometimes up to 8GB): <a href="https://lwn.net/ml/linux-kernel/7c4c1459-60d5-24c8-6eb9-da299ead99ea@oracle.com/">https://lwn.net/ml/linux-kernel/7c4c1459-60d5-24c8-6eb9-d...</a><br> - Baikal T1 (MIPS, not ARM) with up to 8GB: <a href="https://www.t-platforms.ru/production/products-on-baikal/">https://www.t-platforms.ru/production/products-on-baikal/</a><br> - Dragonbox Pyra game consoles with TI OMAP5: <a href="https://pyra-handheld.com/boards/pages/pyratech/">https://pyra-handheld.com/boards/pages/pyratech/</a><br> - Novena Laptop: <a href="https://www.crowdsupply.com/sutajio-kosagi/novena">https://www.crowdsupply.com/sutajio-kosagi/novena</a><br> - SolidRun CuBox Pro i4x4 (early models only): <a href="https://www.solid-run.com/solidrun-introduces-4gb-mini-computer-cubox-i-4x4-2/">https://www.solid-run.com/solidrun-introduces-4gb-mini-co...</a><br> - Tegra K1 and Rockchip RK3288 based Chromebooks: <a href="https://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices">https://www.chromium.org/chromium-os/developer-informatio...</a><br> - Very rare industrial NXP i.MX6 and Renesas RZ/G1 systems (most board manufacturers I talked to said they never sold 4GB options, or never even offered them because of 8Gbit DDR3 availability and cost reasons)<br> </div> Fri, 28 Feb 2020 16:05:40 +0000 An end to high memory?
https://lwn.net/Articles/813547/ https://lwn.net/Articles/813547/ cpitrat <div class="FormattedComment"> <a href="https://en.wikipedia.org/wiki/Time_flies_like_an_arrow;_fruit_flies_like_a_banana">https://en.wikipedia.org/wiki/Time_flies_like_an_arrow;_f...</a><br> </div> Fri, 28 Feb 2020 15:26:37 +0000 An end to high memory? https://lwn.net/Articles/813545/ https://lwn.net/Articles/813545/ Deleted user 129183 <div class="FormattedComment"> <font class="QuotedText">&gt; Fruit flies like a banana.</font><br> <p> I'm pretty sure that round fruit fly like an apple, not like a banana. In fact, are there any other fruit shaped like a banana besides bananas? Because the flight of bananas may actually be _sui generis_.<br> </div> Fri, 28 Feb 2020 15:19:26 +0000 An end to high memory? https://lwn.net/Articles/813495/ https://lwn.net/Articles/813495/ arnd <div class="FormattedComment"> I looked this up a few days ago, and the original patch set was from 2003 and only posted for review [<a href="https://lore.kernel.org/lkml/Pine.LNX.4.44.0307082332450.17252-100000@localhost.localdomain/">https://lore.kernel.org/lkml/Pine.LNX.4.44.0307082332450....</a>] briefly then. It notably made it into the RHEL4 distro, but never into mainline kernels. Back then the main downside was that x86 CPUs of the time lacked address space identifiers, so changing between user space and kernel always required flushing all TLBs, which has an enormous performance impact for system calls and page faults.<br> <p> However, it does seem possible for a 4G vmsplit on 32-bit ARM (probably also MIPS or others) to use ASIDs to avoid the TLB flush and keep the performance impact way down. On ARM with LPAE, we could e.g. use the split page tables to have the top 256MB (or another power-of-two size) reserved for vmalloc pages, modules, MMIO and the kernel image, while the lower 3.75GB are mapped to either user space or a large linear map. There is some extra overhead for copy_to_user() etc., but also extra isolation between user and kernel addresses that may provide security benefits.<br> <p> It's also interesting that the original VMSPLIT_4G_4G patches were not even intended to replace highmem, but to allow configurations with 32GB or 64GB of physical RAM that would otherwise fill a lot of the 896MB lowmem area with 'struct page' structures even before you start allocating inodes, dentries or page tables.<br> </div> Fri, 28 Feb 2020 14:56:36 +0000 An end to high memory? https://lwn.net/Articles/813490/ https://lwn.net/Articles/813490/ farnz <p>In the original PC, they didn't set a 640 KiB limit (that came in with the EGA card). Original IBM PCs with MDA displays have a 704 KiB limit, CGA raises that to 736 KiB, and a theoretical video adapter using I/O ports instead of a memory-mapped buffer could raise it to 768 KiB. <p>Honestly, it looks like IBM never really thought about the real mode address limits; EGA lowered the limit to 640 KiB, but it came in with the 80286 in the PC/AT, which in theory could be run in protected mode and thus not have issues around the 1 MiB limit. OS/2 1.x could thus have ensured we never needed to know about HMA, UMA, XMS etc, had IBM's vision been successful, and hence the "640 KiB" limit of their 8088 products would never have mattered. It's just that IBM failed in delivering its vision, and thus we kept treating x86 systems as ways to run DOS for far longer than intended. Fri, 28 Feb 2020 14:18:51 +0000
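All of the real-mode limits discussed in this thread come from the same arithmetic: a segment:offset pair addresses linear memory as segment * 16 + offset. A small illustrative program (plain C, values matching the comments above): <pre>
/* Real-mode segment:offset arithmetic. On an 8086/8088 the result
 * wraps at 20 bits; on a 286+ with the A20 line enabled, addresses
 * formed from 0xFFFF:0x0010 upward reach the HMA instead of wrapping.
 */
#include &lt;stdio.h&gt;
#include &lt;stdint.h&gt;

static unsigned long linear(uint16_t seg, uint16_t off)
{
	return ((unsigned long)seg &lt;&lt; 4) + off;   /* at most 21 bits */
}

int main(void)
{
	printf("reset vector 0xFFFF:0x0000 -&gt; 0x%05lX\n", linear(0xFFFF, 0));
	printf("EGA boundary 0xA000:0x0000 -&gt; 0x%05lX\n", linear(0xA000, 0));
	printf("HMA start    0xFFFF:0x0010 -&gt; 0x%06lX\n", linear(0xFFFF, 0x10));
	return 0;
}
</pre>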
An end to high memory? https://lwn.net/Articles/813489/ https://lwn.net/Articles/813489/ leromarinvit <div class="FormattedComment"> That wouldn't really be a "split" though - it would mean that all mappings have to be changed when entering and exiting the kernel, exactly what the split tried to prevent.<br> <p> I wonder what the performance impact of just getting rid of the split in this manner would be. Isn't the performance advantage already rendered moot by KPTI?<br> </div> Fri, 28 Feb 2020 13:12:46 +0000 An end to high memory? https://lwn.net/Articles/813488/ https://lwn.net/Articles/813488/ ecm <div class="FormattedComment"> That is all true. However, IBM didn't have to reserve 3/8 of the (20-bit) address space for the UMA; 256 KiB or even only 128 KiB would have been possible too. So the specific 640 KiB limit wasn't really due to the CPU (nor the OS).<br> </div> Fri, 28 Feb 2020 11:48:03 +0000 An end to high memory? https://lwn.net/Articles/813484/ https://lwn.net/Articles/813484/ eru <div class="FormattedComment"> What about a 4g/4g split? I seem to recall some distributions had this kind of feature to allow more memory for userspace.<br> <p> </div> Fri, 28 Feb 2020 10:04:43 +0000 An end to high memory? https://lwn.net/Articles/813481/ https://lwn.net/Articles/813481/ cpitrat <div class="FormattedComment"> Welcome to the group of "the oldest". Time flies like an arrow ...<br> <p> <p> ... Fruit flies like a banana.<br> </div> Fri, 28 Feb 2020 07:50:42 +0000 An end to high memory? https://lwn.net/Articles/813460/ https://lwn.net/Articles/813460/ farnz <p>Note, though, that the IBM PC has that memory layout because the BIOS ROM has to be mapped at 0xFFFF0 on the 8086/8088 - CS:IP of 0xFFFF:0x0000 - because that's where the reset vector is, and you also want 0x00000 to 0x00400 to be in RAM because that's where the 8086/88 put their Interrupt Vector Table. <p>The Model 5150 was being built to a price point, and was in any case limited to 256 KiB RAM, so it wasn't worth the extra ICs needed to allow you to remap all but the first 1 KiB RAM and the BIOS ROM after reset, when you could leave them in place. <p>Arguably, things would have been different if IBM had used the Motorola 68k instead of the 8088; the 68k has its reset vector at address 0, and thus you'd naturally put your ROM at the beginning of address space. Thu, 27 Feb 2020 21:20:16 +0000 An end to high memory? https://lwn.net/Articles/813456/ https://lwn.net/Articles/813456/ flussence <div class="FormattedComment"> On x86 machines with highmem disabled it's even worse: the default for a long time has been to give userspace access to only the first 896MB(!) out of 4GB.<br> <p> There are hidden kconfig options to make that 2 or 3GB, and there are third-party patches to make the default 1GB. It's probably for compatibility with some ancient blob, because I can't imagine that being a good default to keep otherwise; there are quite a few 32-bit systems out there sitting between ⅞ and 4GB of RAM installed.<br> </div> Thu, 27 Feb 2020 20:52:15 +0000 An end to high memory? https://lwn.net/Articles/813454/ https://lwn.net/Articles/813454/ willy <div class="FormattedComment"> The problem is that everything you allocate inside the kernel has to either be in ZONE_NORMAL or be in ZONE_HIGHMEM and be explicitly mapped. So, e.g., every time you call kmalloc(), that page has to be mapped in the 1GB of kernel space _somehow_. If we got rid of ZONE_NORMAL and made everything part of HIGHMEM and kmalloc mapped the page for you, it'd still have just as tight a fit. And worse performance due to every mapping being dynamic.<br> <p> As for undergraduates being able to figure out how the Linux MM works ... I don't think HIGHMEM is the problem here.<br> <p> We may end up with an option to map everything. Some of the security people see this as a solution to the various cache information leakage problems like Meltdown and Spectre. But it's likely to have a negative performance impact.<br> </div> Thu, 27 Feb 2020 20:41:47 +0000
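A minimal sketch of the explicit-mapping step described above, using the kmap_atomic() interface as it existed at the time (illustrative only; touch_any_page() and grab_page() are hypothetical helpers, not kernel APIs): <pre>
#include &lt;linux/gfp.h&gt;
#include &lt;linux/highmem.h&gt;
#include &lt;linux/mm.h&gt;

/* Touch one byte of an arbitrary page. Lowmem pages are permanently
 * mapped; a highmem page must be mapped into a small kernel window
 * first and unmapped afterwards.
 */
static void touch_any_page(struct page *page)
{
	void *addr = kmap_atomic(page);     /* trivial for lowmem pages */

	((char *)addr)[0] = 0;              /* the kernel can now touch it */

	kunmap_atomic(addr);                /* teardown is the real cost */
}

static struct page *grab_page(void)
{
	/* May come from anywhere in RAM, including ZONE_HIGHMEM. */
	return alloc_page(GFP_HIGHUSER);
}
</pre>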
An end to high memory? https://lwn.net/Articles/813455/ https://lwn.net/Articles/813455/ luto <div class="FormattedComment"> <font class="QuotedText">&gt; at the cost of a tiny bit of extra cache and TLB maintenance in certain cases</font><br> <p> On an architecture with lousy TLB maintenance facilities like x86, the expense would be *huge*.<br> </div> Thu, 27 Feb 2020 20:37:12 +0000 An end to high memory? https://lwn.net/Articles/813452/ https://lwn.net/Articles/813452/ Deleted user 129183 <div class="FormattedComment"> <font class="QuotedText">&gt; We'll start by noting, for the oldest among our readers, that it has nothing to do with the "high memory" concept found on early personal computers.</font><br> <p> Oldest? I was born in 1991 and still remember it. In fact I was really confused when I read the title of this article, thinking that it referred to the HMA.<br> </div> Thu, 27 Feb 2020 20:35:44 +0000 An end to high memory? https://lwn.net/Articles/813444/ https://lwn.net/Articles/813444/ mwsealey <div class="FormattedComment"> I dare say another valid concern here is that the linear map consumes precious virtual address space that could otherwise be used by active highmem pages. The linear map itself will exacerbate any use of memory outside the ~512MB of the low end of the physical address map on 32-bit, since it sacrifices about 60% of it to eke out performance for using that low region. Getting rid of the linear map and making EVERYTHING highmem might even out performance, as long as you could ensure that you could always pin those important pages into the virtual address space. There's no functional difference between running out of linear-mapped lowmem space and running out of actual virtual address space. And there should already be a way, not relying on the linear mapping, to prevent allocated regions of memory from being migrated 'out' of the virtual address space and faulting, making them stick around in 'low memory' by default.<br> <p> Just get rid of the linear map: at the cost of a tiny bit of extra cache and TLB maintenance in certain cases, and of not being able to cheap out on pointer math (not that this isn't being tracked many times over anyway), I think everyone's life gets easier and more computer science students would understand how the hell the kernel mm works! Also, no address aliasing...<br> </div> Thu, 27 Feb 2020 19:28:18 +0000 An end to high memory? https://lwn.net/Articles/813436/ https://lwn.net/Articles/813436/ ecm <div class="FormattedComment"> <font class="QuotedText">&gt; We'll start by noting, for the oldest among our readers, that it has nothing to do with the "high memory" concept found on early personal computers.
That, of course, was memory above the hardware-implemented hole at 640KB — memory that was, according to a famous quote often attributed to Bill Gates, surplus to the requirements of any reasonable user.</font><br> <p> This does not seem entirely clear.<br> <p> Actually, there is an area on the IBM PC and 86-DOS platform that is called the Upper Memory Area (UMA), which can hold things such as ROMs, video memory, EMS frames, and Upper Memory Blocks (UMBs). It usually lies between 640 KiB (linear A_0000h) up to 1024 KiB (10_0000h). It wasn't due to the CPU however, but due to the IBM PC's memory layout.<br> <p> The High Memory Area starts at 1024 KiB and reaches up to 1088 KiB minus 16 Bytes. This was introduced with the 286 when the physical memory was expanded to 24 address lines. This meant that even in Real 86 Mode (and on the 386 likewise in Virtual 86 Mode), the segmented addresses between 0FFFFh:0010h and 0FFFFh:0FFFFh would no longer wrap around to the linear addresses within the first 64 KiB minus 16 Bytes, and would instead form a 21-bit linear address pointing into the HMA. (This is also what A20 was about.)<br> <p> I call the memory area from zero up to below the UMA the Lower Memory Area, in symmetry with the UMA and HMA. It is also called base memory or conventional memory.<br> <p> To add to the confusion, in German MS-DOS localisations, the UMA was called hoher Speicherbereich ("high Memory-area") and then the HMA was called oberer Speicherbereich ("upper Memory-area"). That is, the terms were swapped as compared to the English terms. This I believe was fixed starting in MS-DOS 7.00 (bundled with MS Windows 95).<br> </div> Thu, 27 Feb 2020 18:21:09 +0000 An end to high memory? https://lwn.net/Articles/813439/ https://lwn.net/Articles/813439/ willy <div class="FormattedComment"> <font class="QuotedText">&gt; the kernel cannot access it without creating a temporary, single-page mapping, which is expensive</font><br> <p> As always with caches, it's not creating the mapping that's expensive. It's removing it afterwards that's the expensive part!<br> </div> Thu, 27 Feb 2020 18:11:53 +0000
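That asymmetry shows up in the kernel's own pkmap bookkeeping. A loose, simplified sketch of the scheme in mm/highmem.c (not compilable against a real tree; clear_pkmap_pte() stands in for the actual page-table manipulation, and the real code also tracks waiters and re-use): <pre>
/* kunmap_high() is cheap: it only drops a reference and leaves the
 * PTE (and any stale TLB entries) in place. The expensive global
 * flush is deferred until kmap() runs out of window slots.
 */
static int pkmap_count[LAST_PKMAP];         /* one count per window slot */

static void kunmap_high_sketch(unsigned int slot)
{
	pkmap_count[slot]--;                /* no TLB work here at all */
}

static void flush_all_zero_pkmaps_sketch(void)
{
	unsigned int i;

	for (i = 0; i &lt; LAST_PKMAP; i++) {
		if (pkmap_count[i] != 0)
			continue;           /* still in use, keep mapping */
		clear_pkmap_pte(i);         /* stand-in for PTE clearing */
	}
	/* One global flush pays for all the deferred unmaps at once. */
	flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
}
</pre>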