Sorry, Mr. pizza ...
Posted Apr 15, 2008 1:17 UTC (Tue) by JoeBuck (subscriber, #2330)
In reply to: My kid hates Linux (ZDNet) by pizza
Parent article: My kid hates Linux (ZDNet)
... but I wasn't speaking theoretically. I work in electronic design automation.
The doubled-memory effect really does overwhelm the benefit of having more registers, 64-bit math and a better machine architecture in many real cases, particularly when the program's working set is in the gigabytes. The time to move that data through the CPU dominates every other consideration. The 64-bit executable wins when the working set exceeds the 32-bit address space, of course, but in the range where the 32-bit program requires 1-2 Gbytes and the 64-bit program needs nearly double that, the 32-bit build generally comes out ahead.
For this reason, many EDA applications are available in both 32-bit and 64-bit versions, and the recommendation to customers is to use the 32-bit version even on a 64-bit machine, except where the problem is too large.
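A rough illustration of the effect described above (the struct below is invented for the example, not taken from any real EDA tool): a pointer-heavy node roughly doubles in size between an ILP32 and an LP64 build, so the same data set occupies nearly twice the memory and half as many nodes fit in each cache line.

    #include <stdio.h>

    /* Hypothetical netlist-style node, chosen only to show how
     * pointer-heavy structures grow under LP64; real EDA data
     * structures will differ. */
    struct node {
        struct node *next;      /* 4 bytes on ILP32, 8 bytes on LP64 */
        struct node *fanin[4];  /* 16 bytes vs. 32 bytes             */
        const char  *name;      /* 4 bytes vs. 8 bytes               */
        int          id;        /* 4 bytes either way                */
    };

    int main(void)
    {
        /* Typically 28 bytes when built with -m32 and 56 bytes with
         * -m64 (exact figures depend on alignment and padding), so a
         * multi-gigabyte working set of such nodes nearly doubles. */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

Compiling the same file with gcc -m32 and gcc -m64 and comparing the printed sizes makes the working-set argument concrete.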
Sorry, Mr. pizza ...
Posted Apr 15, 2008 6:20 UTC (Tue) by motk (guest, #51120) [Link] (1 responses)
Counterpoint, RAM is pretty cheap these days. Just Add More.
Of course, you do come across motherboard limitations occasionally.
RAM is not the problem
Posted Apr 15, 2008 14:46 UTC (Tue) by GreyWizard (guest, #1026) [Link]
CPU cache and bandwidth limitations are the issue here, not RAM size.
Sorry, Mr. pizza ...
Posted Apr 15, 2008 6:36 UTC (Tue) by bronson (subscriber, #4806) [Link] (1 responses)
If 64-bit pointers are really that big a deal, how come the EDA guys don't use 4GB memory pools with 32-bit offsets? That way you get the speed and huge memory space of 64-bit with the space efficiency of 32-bit. Seems like a win-win.
It's been quite a while since I've done EDA (some VLSI layout and simulation back in 2003). I remember some seriously crufty software produced by vendors who would do anything to avoid an update. Some of the tools I used were written *and compiled* pre-1998! It was a nightmare trying to get that junk to run. I eventually got the toolchain working and then I never let anybody touch that box again. Not so much as a security update or a package upgrade, lest it break anything.
So... if the EDA industry is indeed pushing back against 64-bit, there might be more to it than just pointer size inflating the working set. :)
Sorry, Mr. pizza ...
Posted Apr 17, 2008 20:34 UTC (Thu) by jzbiciak (guest, #5246) [Link]
> If 64-bit pointers are really that big a deal, how come the EDA guys don't use 4GB memory pools with 32-bit offsets? That way you get the speed and huge memory space of 64-bit with the space efficiency of 32-bit. Seems like a win-win.
Sounds like a maintenance nightmare to me, particularly if the code base is shared between 32-bit and 64-bit worlds, and if any portion of the data set has an index larger than 2^32 - 1. The reason I say "index" is that these pools could be homogeneous pools of structures, so the addressed memory in that pool could actually be as large as 2^32 * sizeof(struct whatever), rather than just 2^32 bytes.
Sure, on 64-bit machines you get the compact representation. But on all machines that share that code base, you add an additional indirection to compute your final pointer, and you've thrown up partitions in your memory map based on where these pools are. If your problem doesn't partition into pools nicely, you're hosed.
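For concreteness, here is a minimal sketch of the scheme being debated, assuming a homogeneous pool of fixed-size records; all names here (struct net, net_pool, net_ref, pool_alloc, pool_get) are invented for the example and not taken from any EDA code base. References stay 4 bytes wide on LP64, but every access pays the extra indirection described above, and no single pool can exceed 2^32 entries.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical record type; any fixed-size struct works. */
    struct net { uint32_t driver; uint32_t flags; };

    /* A homogeneous pool addressed by 32-bit indices instead of pointers.
     * The pool can cover up to 2^32 * sizeof(struct net) bytes, but it
     * cannot hold more than 2^32 - 1 records. */
    struct net_pool {
        struct net *base;   /* one full-width pointer per pool, not per reference */
        uint32_t    used;
        uint32_t    cap;
    };

    typedef uint32_t net_ref;      /* 4 bytes on both ILP32 and LP64 */

    static net_ref pool_alloc(struct net_pool *p)
    {
        if (p->used == p->cap) {   /* grow; real code would check for overflow */
            p->cap = p->cap ? p->cap * 2 : 1024;
            p->base = realloc(p->base, (size_t)p->cap * sizeof(struct net));
            if (!p->base)
                abort();
        }
        return p->used++;
    }

    /* The extra indirection: every dereference goes through the pool's
     * base pointer instead of following a raw pointer. */
    static inline struct net *pool_get(struct net_pool *p, net_ref r)
    {
        return &p->base[r];
    }

    int main(void)
    {
        struct net_pool pool = { 0 };
        net_ref r = pool_alloc(&pool);
        pool_get(&pool, r)->driver = 42;   /* access through the pool base */
        free(pool.base);
        return 0;
    }

One side benefit of index-based references is that they remain valid when the pool is reallocated and moves; the corresponding cost is that anything which must span pools, or grow past the 2^32-entry limit, falls back to full-width pointers, which is the partitioning problem raised above.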
Sorry, Mr. pizza ...
Posted Apr 15, 2008 13:21 UTC (Tue) by pizza (subscriber, #46) [Link]
Fair enough; your particular daily-use EDA app (proprietary? you've never actually mentioned what it is) performs worse. You use what best supports your needs, after all.
However, my daily-use apps perform significantly better under 64-bit. That 11% improvement with dcraw was the only benchmark I could recreate immediately, as it's trivial to recompile.
My main daily-use app (a GCC cross-compiler building a multi-million-line codebase) runs considerably faster under x86_64. However, I no longer have an identical 32-bit system for comparison, so I can't supply benchmarks without blowing half a day on it. (The 64-bit GNOME desktop *feels* faster too, but that's obviously subjective.)
One of the folks I work with has also raved about the improvements he saw using the 64-bit versions of his particular FPGA synthesizer tools.
Not to mention the speedup one gets by not needing bounce buffers (and other games) for I/O.