Pettenò: Debunking x32 myths
Posted Jun 25, 2012 21:48 UTC (Mon) by JoeBuck
In reply to: Pettenò: Debunking x32 myths
Parent article: Pettenò: Debunking x32 myths
To give an example from one field (though one that matters very much to the proponents of this ABI, I suspect): the current state of affairs is that EDA users commonly run both a 32-bit and a 64-bit build of the same application (simulator, model checker, synthesis tool, etc.) on the x86-64 architecture, preferring the former whenever the problem size fits into a 32-bit address space. That's because these applications are pointer-heavy, data-access-heavy, and limited by what fits into physical memory (it's not just cache size, though of course that is a factor as well). The result is that the 32-bit version is often faster, at least for a range of problem sizes that is quite common, simply because it uses less memory. The 64-bit version is typically reserved for test cases that require more than 4GB, because the throughput of a server farm running many simulations is often constrained by how many jobs fit into physical memory.
With x32 there would still be two different versions of the executable, so the relevant comparison is between the x32 version and a traditional 32-bit x86 version. There are a number of wins: the larger register set means a much smaller penalty for PIC code, 64-bit operations are available natively, and there is far less register-spilling code. I would still recommend a 64-bit kernel, but I find x32 very interesting, and I don't think the author of this piece really understands why the proponents' employer might be investing heavily in it.