
The x32 subarchitecture may be removed

Posted Dec 12, 2018 18:34 UTC (Wed) by josh (subscriber, #17465)
In reply to: The x32 subarchitecture may be removed by sorokin
Parent article: The x32 subarchitecture may be removed

You don't have to keep all data in one big array. You could allocate memory using your own custom allocator and know that all pointers are in the same 32-bit range. (mmap with MAP_32BIT would make that relatively easy.)

It's not necessarily a good idea, but then, most applications shouldn't be doing it.
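A minimal sketch of the idea, assuming x86-64 Linux (MAP_32BIT is a real mmap flag there; the bump allocator around it is purely illustrative):

    #include <sys/mman.h>
    #include <cstddef>

    // Toy bump allocator over a MAP_32BIT mapping (x86-64 Linux only):
    // every pointer it returns lies in the low 2GB of the address
    // space, so it can safely be stored in 32 bits.
    class Low32Arena {
        char *base_ = nullptr;
        std::size_t used_ = 0, size_ = 0;
    public:
        explicit Low32Arena(std::size_t size) : size_(size) {
            void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
            if (p != MAP_FAILED)
                base_ = static_cast<char *>(p);
        }
        ~Low32Arena() { if (base_) munmap(base_, size_); }

        void *alloc(std::size_t n) {
            n = (n + 15) & ~std::size_t{15};   // keep 16-byte alignment
            if (!base_ || size_ - used_ < n) return nullptr;
            void *p = base_ + used_;
            used_ += n;
            return p;
        }
    };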



The x32 subarchitecture may be removed

Posted Dec 12, 2018 18:53 UTC (Wed) by sorokin (guest, #88478) [Link] (7 responses)

> You don't have to keep all data in one big array. You could allocate memory using your own custom allocator and know that all pointers are in the same 32-bit range. (mmap with MAP_32BIT would make that relatively easy.)
> It's not necessarily a good idea, but then, most applications shouldn't be doing it.

Definitely not a good idea. Suppose one has a pointer-heavy application and wants to move it to 32-bit pointers. One has to replace raw pointers (T* with something like lowmem_ptr<T>), smart pointers (unique_ptr<T> with lowmem_unique_ptr<T>), smart pointer factories (make_unique<T> with lowmem_make_unique<T>), and containers (does specifying lowmem_allocator work here?), and also provide lowmem_malloc (right?).
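To illustrate, a minimal sketch of just the first of those replacements (lowmem_ptr is the hypothetical name from above; everything else on the list needs comparable machinery):

    #include <cstdint>

    // Hypothetical lowmem_ptr<T>: stores an address assumed to fit in
    // 32 bits because the allocator only hands out low memory
    // (e.g. mapped with MAP_32BIT). Every T* in the application would
    // have to become one of these.
    template <typename T>
    class lowmem_ptr {
        std::uint32_t addr_ = 0;
    public:
        lowmem_ptr() = default;
        explicit lowmem_ptr(T *p)
            : addr_(static_cast<std::uint32_t>(
                  reinterpret_cast<std::uintptr_t>(p))) {}
        T *get() const
            { return reinterpret_cast<T *>(std::uintptr_t{addr_}); }
        T &operator*() const { return *get(); }
        T *operator->() const { return get(); }
        explicit operator bool() const { return addr_ != 0; }
    };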

This process is invasive and not easily reversible. One can say "still, this is possible". Yes, it is possible, but it is so complicated that no one will do it. That is why I said "in theory yes". In practice it means no.

Compiling for the x32 ABI is a much cleaner solution.

The x32 subarchitecture may be removed

Posted Dec 12, 2018 23:37 UTC (Wed) by linuxrocks123 (subscriber, #34648) [Link] (5 responses)

I'd think the way to do it would probably be at the compiler and glibc level. It seems like the compiler should be able to make pointers 32-bit on its own, and glibc could be made to "thunk" the system calls by zero-extending all the pointers.
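A rough illustration of the thunking idea (thunked_read is a made-up name; glibc provides no such interface):

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical wrapper: zero-extend a 32-bit user pointer to 64
    // bits, then issue the ordinary x86_64 system call.
    ssize_t thunked_read(int fd, std::uint32_t buf32, std::size_t count) {
        void *buf = reinterpret_cast<void *>(std::uint64_t{buf32});
        return syscall(SYS_read, fd, buf, count);
    }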

I'm guessing there might be something I'm missing here, because, otherwise, I don't know why they wouldn't have done it that way to begin with.

The x32 subarchitecture may be removed

Posted Dec 13, 2018 5:56 UTC (Thu) by eru (subscriber, #2753) [Link] (4 responses)

> It seems like the compiler should be able to make pointers 32-bit on its own, [...]

Sounds like the "memory models" of MS-DOS and 16-bit Windows C compilers. You can certainly make it work, but you must make sure that all modules are compiled with the same memory model option and that any precompiled libraries you link against are for the same memory model as your code, or you need extensions or #pragmas to define "far" function prototypes. I'm not sure it is worth it. In MS-DOS this caused all sorts of fun, but it was necessary to allow C programs to access more than 64k of memory. For x86_64 there is no such compelling reason.

The x32 subarchitecture may be removed

Posted Dec 13, 2018 7:26 UTC (Thu) by ibukanov (subscriber, #3942) [Link] (2 responses)

In MS-DOS and Windows 3.* there was mostly no need to annotate pointers with __near and __far keywords. One configured the compiler to use data/code pointers as necessary, like 32-bit for data and 16-bit for code, and that was it. The far/near keywords were only necessary when one wanted to optimize things. For example, with 32-bit code pointers it was occasionally useful to optimize function pointers down to 16 bits if one knew that the functions, for example, came from the same source and the code span was less than 64K. Another problem was 16-bit data and 32-bit code: then one could not store a function pointer in a void*, and it had to be annotated as void __far*.

The x32 subarchitecture may be removed

Posted Dec 13, 2018 13:15 UTC (Thu) by eru (subscriber, #2753) [Link] (1 responses)

> In MS-DOS and Windows 3.* there was mostly no need to annotate pointers with __near and __far keywords.

True, when you compiled all your code with the same memory model options. But you did need them for external libraries (which may or may not use the same memory model) and for low-level code. (And of course for optimizations, as you noted.) As I recall, the Microsoft compiler had four memory models, for all combinations of near/far function and data pointers. (Some compilers even had a fifth, "huge", which permitted arrays larger than 64k.) We could obviously reach the same number of models with 32- vs. 64-bit data and function pointers, and relive the memory model mess-ups of the 1980s... To quote one of the posts above, "Don't go there".

The x32 subarchitecture may be removed

Posted Dec 13, 2018 14:38 UTC (Thu) by ibukanov (subscriber, #3942) [Link]

Borland C++ supported the huge memory model. And I really do not remember far/near being an issue even with external code, as libraries came compiled for all the relevant models.

A memory model with 32-bit code pointers and 64-bit data pointers could be useful. We are still far away from 4GB executables, even when accounting for JIT.

The x32 subarchitecture may be removed

Posted Dec 13, 2018 16:26 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

> you must make sure that all modules are compiled with the same memory model option, and any precompiled libraries you link against are for the same memory model as your code

But that's already true for x32 code. The ABI proposed by linuxrocks123 (32-bit pointers but x86_64 system calls) would be similar to x32 but implemented entirely in userspace, with pointer size translation at the user <-> kernel boundary.

The x32 subarchitecture may be removed

Posted Jan 6, 2019 18:01 UTC (Sun) by jwakely (subscriber, #60262) [Link]

> containers (does specifying lowmem_allocator work here?)

Yes, in theory. In practice, not all of GCC's std:: containers do the right thing yet; see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57272
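The allocator half of that is at least easy to sketch. Reusing the hypothetical lowmem_ptr from earlier in the thread, it would look something like:

    #include <sys/mman.h>
    #include <cstddef>
    #include <new>

    // Hypothetical lowmem_allocator: names a custom ("fancy") pointer
    // type and hands out MAP_32BIT memory. Containers only shrink
    // their internal pointers if they route everything through
    // allocator_traits<A>::pointer, which is what the bug above tracks.
    template <typename T>
    struct lowmem_allocator {
        using value_type = T;
        using pointer = lowmem_ptr<T>;  // the fancy pointer from earlier

        pointer allocate(std::size_t n) {
            void *p = mmap(nullptr, n * sizeof(T),
                           PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
            if (p == MAP_FAILED) throw std::bad_alloc{};
            return pointer(static_cast<T *>(p));
        }
        void deallocate(pointer p, std::size_t n) {
            munmap(p.get(), n * sizeof(T));
        }
    };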

