The Grumpy Editor goes 64-bit
By the early 1980's, 32-bit systems had taken over much of the computing world. And, with certain exceptions, 32 bits has been the way of things for a good two decades. Processor speeds have gone up by three orders of magnitude, as have disk sizes; main memory has grown by even a bit more. But most systems sold today still use 32-bit words and addresses. The fact is, 32 bits suffice for almost every quantity we need to manipulate with computers. The exception, increasingly, is memory. We have hit the point where we are running out of address space. The need to work with ever more memory to run our increasingly bloated applications will eventually push much of the industry over to 64-bit processors.
Your editor decided to be ahead of the curve, for once. So he ordered up a new motherboard and Athlon64 processor. Before the process was done, he also ended up buying a new video card, power supply, and disk drive. In fact, the only original component left in the case (a holdover from when LWN thought it might be a training company) is the diskette drive. But, the new system is now up and running, and your editor has had a chance to get a feel for what the 64-bit world has to offer.
The hardest question, perhaps, was the choice of distribution to run. The new system replaces a Debian unstable box, so Debian was the obvious first choice. The state of the Debian x86_64 port is a little discouraging, however. Installation requires starting with the basic x86 distribution, coming up with 64-bit versions of gcc and glibc, building a new 64-bit kernel, booting that, and piecing together the rest of the system with the other x86_64 packages that have become available. More than ten years ago, your editor converted, by hand, his first Linux box from a.out to ELF binaries; installing Debian x86_64 looks like a similar process. Somehow, what looked like an interesting and instructive adventure in the early 1990's is distinctly less appealing now.
MandrakeSoft and SUSE both offer x86_64 versions of their distributions. The Gentoo port seems to be coming along reasonably well, but some time spent digging through the Gentoo package database shows that much of the software base still lacks x86_64 support. Your editor, in the end, went with the Fedora Core 2 test 2 release, at least for now. FC2t2 gives good visibility into the development process (as do Mandrake and Gentoo), a familiar Red Hat core, and the ability to play around with some bleeding-edge features like SELinux. It is also designed around the 2.6 kernel, which is an important feature.
When one leaves the x86 mainstream, it does not take long to realize that the well-trodden pathways have been left behind. Mirrors for the x86_64 architecture are relatively scarce and often behind the times. Most applications do not, yet, come prebuilt for this architecture. Documentation on how to get x86_64 systems up and running is minimal. It is all a bit of an adventure.
That said, the FC2t2 distribution works well - as well as could be expected on any architecture for a development release. And the really nice thing about the x86_64 architecture is that most 32-bit x86 binaries work just fine, as long as you have 32-bit versions of the relevant libraries around. That fact alone makes the transition to this architecture relatively easy.
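Telling the two kinds of binaries apart, when the need arises, is easy enough; something along these lines, with the paths serving only as examples:

    # Is a given program a 32-bit or a 64-bit executable?
    file /bin/ls                # a native binary reports "ELF 64-bit"
    file /usr/bin/ooffice       # a 32-bit holdover reports "ELF 32-bit"
    # Which libraries does each one pull in? (/lib64/... versus /lib/...)
    ldd /bin/ls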
The need for 32-bit libraries complicates system administration, however. An x86_64 Fedora system has many duplicated packages installed, and working with rpm can, occasionally, be a bit confusing. The rpm interface was not, perhaps, designed for dealing with a world where two packages have the same name and version number, but are still distinct. Unless you plan to leave the 32-bit world behind entirely, however, you will need two versions of the libraries. Chances are that most x86_64 systems will want to run 32-bit binaries for some time - in some cases, they perform better, and, in any case, some programs in FC2t2 (e.g. OpenOffice.org) are still built that way.
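One way to keep things straight is to ask rpm to show the architecture explicitly; glibc here is only an example of a package installed for both architectures:

    # Show both copies of a package, with the architecture made visible:
    rpm -q --qf '%{name}-%{version}-%{release}.%{arch}\n' glibc
    # On an x86_64 Fedora system this should list an x86_64 build and a
    # 32-bit (i386 or i686) build of the same version side by side.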
Building applications can also be a bit of a challenge, at least at first. Quite a few makefiles and configure scripts assume that libraries live in /usr/lib. On a Fedora system, /usr/lib has the 32-bit versions of the libraries; the native versions live in /usr/lib64. A makefile which uses the default gcc (which compiles in 64-bit mode) and tries to explicitly link against things in /usr/lib will fail. Once you learn to recognize this problem, it is easy to fix.
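The fix is usually a one-line change to the offending makefile or configure invocation; a hypothetical example (myprog and libfoo are made up, of course):

    # This link line fails in 64-bit mode; /usr/lib has only 32-bit libraries:
    #     gcc -o myprog myprog.o -L/usr/lib -lfoo
    # Point it at the 64-bit library directory instead:
    gcc -o myprog myprog.o -L/usr/lib64 -lfoo
    # Autoconf-based packages can usually be pointed the right way with:
    ./configure LDFLAGS=-L/usr/lib64 --libdir=/usr/lib64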
Your editor was naturally interested in performance issues. To that end, he built a version of bzip2 in both 64-bit and 32-bit mode and compared the results. Both compression and decompression ran about 10% faster in the 64-bit mode. With the x86_64 processor, better performance is generally expected in the native mode, mainly due to the additional registers which are available. The executable size and memory usage in 64-bit mode were larger, but not by much. A second test, using the SoundTouch library, yielded a surprise, however: changing the tempo of a large sound file ran in less than 1/5 the time in 32-bit mode. The Athlon64 processor, it would seem, runs certain operations far more slowly in 64-bit mode; your editor has not, yet, had the time to track this one down.
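For the curious, the bzip2 comparison amounts to little more than the following; the test file is whatever large file happens to be handy:

    # Build bzip2 from its source tarball in the default, 64-bit mode:
    make clean all
    time ./bzip2 -k -f some-large-file
    time ./bzip2 -d -k -f some-large-file.bz2
    # Rebuild the same source as a 32-bit binary (the 32-bit glibc
    # development packages must be installed) and repeat the timings:
    make clean all CC="gcc -m32"
    time ./bzip2 -k -f some-large-file
    time ./bzip2 -d -k -f some-large-file.bz2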
Despite the paucity of mirrors, the glitches, and the surprises, the x86_64
platform makes for a very nice Linux system. The kernel support for this
architecture is outstanding, the performance is good, and the expanded
address space renders concepts like "high memory" obsolete. After all,
we'll never need more memory than can be addressed with 64 bits...
Seriously, however, this architecture has helped to realize one of the
great promises of Linux: a freedom of choice in hardware as well as
software. 64-bit systems are now available at a price even an LWN editor
can afford. This editor, who just shifted his old Pentium 450 box over to
sacrificial kernel testing duty, is distinctly less grumpy.
Posted Apr 15, 2004 1:32 UTC (Thu)
by gabriel (guest, #51)
[Link]
Posted Apr 15, 2004 1:46 UTC (Thu)
by vblum (guest, #1151)
[Link] (1 responses)
Posted Apr 15, 2004 8:55 UTC (Thu)
by zmower (subscriber, #3005)
[Link]
Other than that it automatically detected just about everything it could (apart from my printer, which needs a later version of gimp-print; it works in Debian unstable). It's very polished and is probably just like the x86 version. I last had SuSE 6.4 installed, so it's been a while since I last used it, but everything's familiar, only better. I'll definitely be buying 9.1 for the 2.6 kernel (included in 9.0, but only experimental) and printer support. 64-bit software's not really the reason I chose 64 bits. Like the ed, I wanted to be ahead of the curve, and I upgraded from an 800MHz Celeron on a BX board. I don't see me getting a new computer for a loooonnnng time.
Posted Apr 15, 2004 1:54 UTC (Thu)
by freeio (guest, #9622)
[Link] (1 responses)
This is being written on the currently cheapest of the 64-bit boxes, a 333 MHz Sun Ultra 5, which is running Aurora Linux 1.0 (a port based on RH 7.3 with later kernels and patches). The 64-bit code is mostly limited to kernel space, though, because it adds so little utility on a system which maxes out at 1GB of memory. When I rebuilt apache on our 270 MHz Ultra 5, I compiled it for 64-bit, and it works just fine. The capability to compile 64-bit applications is there, but really there is no advantage for most applications and most users. If one had a Sun 64-bit box that could hold more memory, then perhaps it would be of more benefit.

So hurrah for the 64-bit i86 family processors, which will bring 64-bit processing to the masses. And hurrah for the mainstream 64-bit Linux distributions, which are available _today_. None of this niche stuff anymore!

Marty
Posted Apr 16, 2004 4:46 UTC (Fri)
by JoeBuck (subscriber, #2330)
[Link]
The Linux kernel is only a tiny fraction of the code in a typical distribution. Free software had been addressing 64-bit portability since long before Linus got around to it (in both the GNU and BSD camps), which is why Linus had a good-quality 64-bit compiler for the Alpha when he started the port.
Posted Apr 15, 2004 2:15 UTC (Thu)
by tjc (guest, #137)
[Link] (4 responses)
It seems that 32-bit processors with 36 or 40-bit address buses could prolong the life of 32-bit (instruction) processors for some time. When it comes to computers, I'm always an advocate of doing things in a minimalist sort of way. :-)
Posted Apr 15, 2004 2:20 UTC (Thu)
by corbet (editor, #1)
[Link] (2 responses)
That, of course, is exactly what modern Pentium processors are. You can put 64GB of memory into such a thing, but that road leads to all the high memory hassles that the kernel hackers have been having so much fun dealing with. Addressing more physical memory can be (and is) done; the real problem is that you need larger virtual addresses. And that really forces a larger word size or your pointer arithmetic slows to a crawl.
Unless, of course, you want to get back into multiple memory models and "near" and "far" addresses. Personally, I don't miss those days at all.
Posted Apr 15, 2004 2:38 UTC (Thu)
by tjc (guest, #137)
[Link] (1 responses)
Nor do I, and to be honest, I'm not really "up" on the details of memory management in Linux. But it seems to me that going to a 64-bit address space isn't free either. For one thing, if you use hierarchical paging, either the page table is going to be huge, or else you're going to end up with about 6 levels of indirection, which has got to be slow too. My best guess is that they hash the page table somehow? Anyway, I'll let you see how it goes, and I'll move to 64-bit when it's safe - say in 3 or 4 years. :-)
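A back-of-the-envelope calculation (assuming the usual 4KB pages, 8-byte page table entries, and 12 bits of page offset) suggests it's not quite as bad as six levels; this is roughly what the x86-64 hardware actually does, walking a four-level tree over 48 bits of virtual address rather than hashing:

$$ \frac{4096\ \text{bytes/table}}{8\ \text{bytes/entry}} = 512 = 2^{9}\ \text{entries per level}, \qquad \frac{48 - 12}{9} = 4\ \text{levels}. $$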
Posted Apr 15, 2004 4:07 UTC (Thu)
by smoogen (subscriber, #97)
[Link]
Posted Apr 15, 2004 2:28 UTC (Thu)
by dlang (guest, #313)
[Link]
Those who have been around long enough to remember EMS memory on the 8088 machines will find the Intel PAE memory concept very familiar, and there is a GOOD reason why nobody uses EMS memory anymore.
Posted Apr 15, 2004 3:19 UTC (Thu)
by arget (guest, #5929)
[Link]
I am reminded again why I enjoy paying for the privilege of LWN each week. It's the first-hand experience, the erudite critique, and the passion that drives it. I hope Mr. Corbet continues to find better things than Xanax to spend money on... :-)
Posted Apr 15, 2004 5:03 UTC (Thu)
by jamesh (guest, #1159)
[Link]
The web page for the SoundTouch library lists the following as one of its features: Is it possible that the speed difference you encountered was due to the 32-bit version using these optimisations and the 64-bit version not?
Posted Apr 15, 2004 7:57 UTC (Thu)
by dale77 (guest, #1490)
[Link]
Your dispatches from the wide open spaces of 64-bit ness make me want to kiss my Pentium 133 goodbye and join you... Soon, very soon.
Posted Apr 15, 2004 8:57 UTC (Thu)
by alspnost (guest, #2763)
[Link]
What I'm really looking forward to is games that properly make use of the 64-bit stuff; the UT2004 demo already performs well on an Athlon XP 2500+ system, but I'd love to see a full 64-bit build of that screaming along....
Posted Apr 15, 2004 12:23 UTC (Thu)
by gavino (guest, #16214)
[Link]
I can't wait to play UT2006(?) 64-bit native and over IPv6! (don't forget that other '6' IPv6) :)
Posted Apr 15, 2004 13:53 UTC (Thu)
by zooko (guest, #2589)
[Link] (11 responses)
2^64 is 18,446,744,073,709,551,616. That's 17,179,869,184 gigabytes. And what would you store in there? Let me put it another way -- what's your max bandwidth? How many bytes of data can you meaningfully produce or consume in a second? Let's say that you will someday produce and/or consume 1 GB per second, and you want to store all of this stuff for possible later use. Then this device that you have can store enough of this data for 544 years' worth of uninterrupted, non-repeating production/consumption. Maybe corporations, AIs, or neo-humans will have use for such amounts of memory, but I don't think that humans will!
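For anyone who wants to check that figure, the arithmetic (taking 1 GB to mean 2^30 bytes) works out to:

$$ \frac{2^{64}\ \text{bytes}}{2^{30}\ \text{bytes/second}} = 2^{34}\ \text{seconds} \approx 1.7\times 10^{10}\ \text{seconds} \approx 544\ \text{years}. $$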
Posted Apr 15, 2004 17:59 UTC (Thu)
by fjf33 (guest, #5768)
[Link] (4 responses)
Posted Apr 16, 2004 21:31 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (3 responses)
I'm always suspicious of these statements. Throughout history, people have been smart and imaginative.
Though I was not talking to people about memory sizes in those days, I'm sure anyone who thought about it seriously realized we would need 1MB of memory in a computer.
They quite possibly didn't think we'd ever be able to get it, but that's a whole different discussion.
I guess we could put it another way: Nobody thought in the future we would need 20 bit addresses. (Because we'd never be able to get 1MB of memory behind a single CPU).
Posted Apr 20, 2004 16:47 UTC (Tue)
by zooko (guest, #2589)
[Link] (2 responses)
Posted Apr 23, 2004 0:03 UTC (Fri)
by chant (guest, #20286)
[Link] (1 responses)
Or....on another order of magnitude altogether, the binaries for Windows U[niverse]E. :)
Posted Apr 27, 2004 21:01 UTC (Tue)
by slamb (guest, #1070)
[Link]
I've seen discussions of this number before, but I don't remember the result. But I seem to
recall it being less than 2^128; thus, a 128-bit address space is good enough for anyone. Really.
There are other arguments based on thermodynamics limiting the complexity of a computer.
Google for discussions of thermodynamics and reversible vs. irreversible computers and you'll
see what I mean.
Posted Apr 15, 2004 18:26 UTC (Thu)
by tjc (guest, #137)
[Link] (1 responses)
And just think how long it would take for the POST to count all that! :-) You'd need a pretty fast CPU too..
Posted Apr 16, 2004 3:29 UTC (Fri)
by wolfrider (guest, #3105)
[Link]
Posted Apr 16, 2004 21:24 UTC (Fri)
by giraffedata (guest, #1954)
[Link]
I do, but the question you've answered is really, will we someday need more than 2^64 bytes of memory in a computer like the ones we have today?
You've made an excellent argument that the answer to that question is no.
On the other hand, I'm sure some day computers will be structured differently, and we will most likely be addressing memory larger than what fits on a motherboard, or even in one building. There is definitely more than 2^64 bytes of information to be stored in the world. We'll struggle for a while with all the usual stopgaps -- paging, multilevel addressing, and such. But eventually we'll break down and get more than 64 bits of address space and go through a transition much like the 32/64 one.
Posted Jan 18, 2011 10:24 UTC (Tue)
by stealthpaladin (guest, #72424)
[Link] (1 responses)
Some time around 2009, Google's CEO estimated there were 500 million terabytes of data attributable to the internet.
So by now, we could probably use a 2^64 computer to hold 2.x backups of the internet =P Which could have its uses =P
As for 2^128 and this idea:
Well, there are ways to take a few values (say 8, for example) and, based on these, achieve further resolution than 8 (say 256, for example =P). Really there are many techniques which provide more resolution for processing and storage than their components, so I think identifying the whole universe may not be so impossible really.
I also think of the whole... leaf in a pond thing. There's an idea that if you can *fully* define and understand one particle on the leaf then you can access the info of the entire pond. Perhaps with 2^128 we could define one proton so well that everything else is just implied lol.
Posted Jan 18, 2011 10:28 UTC (Tue)
by stealthpaladin (guest, #72424)
[Link]
Posted Feb 7, 2018 10:00 UTC (Wed)
by rbanffy (guest, #103898)
[Link]
I can probably think of other clever uses for special-purpose computers, but I need to get back to my actual job - deadlines are unforgiving.
Posted Apr 15, 2004 14:06 UTC (Thu)
by evgeny (subscriber, #774)
[Link] (2 responses)
So, when the time came to install Linux on my new shiny dual-Opteron _workstation_, I hardly hesitated. Well, I haven't been disappointed. But there were issues specific to the workstation install. Many apps are masked, indeed, but most of them are rather "not unmasked" since no one has yet reported success. Masks can be easily overridden on the command line (and in the case of success, everyone's encouraged to report to Gentoo's bugzilla, which usually results in an almost-instant unmasking of the package in the CVS).

So now I'm running an almost-pure, full-blown 64-bit desktop (OO being the only 32-bit exception). Mozilla is 64 bits, so the only thing missing is flash in web pages (those who need it badly can run 32-bit mozilla/firefox/... builds). For me, it quickly added to the reasons why one should hate closed-source software, and I'm constantly updating my black list of sites whose owners hire brainless web designers putting a flash as the entry page navigation (and these "designers" deserve to be put in hell, forced to program under Windows 2.0 on a 640k AT).

The only issue I spent a lot of time on was the NVidia (proprietary) driver - which resulted in swapping the FX5200 thingy for a Radeon 9200 card supported by XFree out of the box. Open source scores one more. (Can't resist the temptation to share my experience with the so-much-worshipped NVidia support. It seems all they're doing is accurately redirecting bug reports to /dev/null, with a few occasional comments on the BB forums of the kind "be happy NVidia bothers with you filthy linuxists at all".)

Wow, quite a lengthy comment... Hope it helps others.
Posted Apr 15, 2004 20:51 UTC (Thu)
by BillyBreen (guest, #3988)
[Link]
Posted Apr 15, 2004 20:56 UTC (Thu)
by komarek (guest, #7295)
[Link]
We are stuck on kernel 2.4 because we can't get rw AFS for kernel 2.6. Please, someone, replace AFS! It won't die on its own. -Paul Komarek
Posted Apr 15, 2004 14:41 UTC (Thu)
by evgeny (subscriber, #774)
[Link] (5 responses)
As far as the amd64 arch is concerned, one should realize that its direct competitor (before Intel announced its "own" x86_64), the P4/Xeon, is nothing but a braindamaged, multimedia-oriented, castrated PIII. The only runner-up is Centrino, but finding an SMP Centrino box is not a trivial task ;-). Plus, AMD's GHz are higher. So, even compiled for x86_32 (if e.g. one is bound to a commercial 32-bit-only compiler), computation-intensive sci apps should significantly outperform when run on amd64.
Posted Apr 15, 2004 16:24 UTC (Thu)
by ewan (guest, #5533)
[Link] (2 responses)
Posted Apr 15, 2004 16:35 UTC (Thu)
by evgeny (subscriber, #774)
[Link] (1 responses)
Posted Apr 17, 2004 5:20 UTC (Sat)
by ksmathers (guest, #2353)
[Link]
Certainly I2 is improved, but not that much. A disadvantage of EPIC is that if your compiler isn't really good at optimizing for the instruction set, the CPU will spend most of its time idle. Itanium performance depends very heavily on being able to keep the instruction lines full, and since there is no intelligence in the CPU at all, it is up to the compiler to accomplish that parallelism.
Itanium can beat any other CPU out there, but without the parallelism afforded by an extremely customized compiler backend, it can also feel like molasses.
Posted Apr 17, 2004 22:06 UTC (Sat)
by dvdeug (guest, #10998)
[Link] (1 responses)
Posted Apr 27, 2004 0:34 UTC (Tue)
by barrygould (guest, #4774)
[Link]
A 32-bit binary won't be able to use them.
Posted Apr 17, 2004 3:40 UTC (Sat)
by RichardRudell (subscriber, #7747)
[Link] (4 responses)
Redhat Enterprise Linux 3 was my choice and it works flawlessly. Pricey, yes, but all of the features needed for compute-intensive applications which need more than 32-bits. Robust, solid, and supported by the third-party software I need. I notice that LWN has never done a piece on how RHEL3, which is a 2.4 kernel, has all of the most useful features of the 2.6 kernel backported to a solid and stable platform. If you need me to list what they are, perhaps a detailed piece analyzing the RHEL3 kernel is more in order ... [I have no relationship to RedHat other than being a satisfied customer].

BTW: it's not 64-bit addressing. Hypertransport has a physical limitation of 40-bits. Yeah, I couldn't believe it either until I read the specs. 1 TB is a nice improvement over 4 GB, but it's not the 2^64 talked about in an earlier post. Forget virtual address space (currently 48-bits on Opteron), I'm talking about physical address space. There is a hardwired limit which is going to hurt sooner than later.

Worse, because "int" is still 32-bit, we should really call this the "36-bit" machine, i.e., many applications will break because they count the "number of things" using an "int"; if the "thing" is 32-bytes, the "int" overflows at 2^36 bytes. That's only 16 GB which is too near current reality for my liking.

Also, 64-bit mode beats 32-bit mode on Opteron only for "small memory" applications. "Big memory" applications, i.e., why you want 64-bits in the first place, still lose by 5-20%. See, the problem is pointers being 64-bits reduces the effective size of the L1/L2 cache and cause increased traffic to DDR. Typical applications show a 1.6x increase in memory space (each struct being a mix of pointers (64-bit) and ints (32-bit). The extra 8 registers and the new ABI (pass arguments in registers, no frame pointer) help, but you just can't overcome a 60% increase in memory traffic when you really need to use all that memory.

Sincerely,
Posted Apr 17, 2004 4:02 UTC (Sat)
by RichardRudell (subscriber, #7747)
[Link]
I would like to add that my field is Computer-Aided Design for Integrated Circuit Design which has been fighting the 4 GB memory barrier for the last 5 years. First came 3 GB on Linux, then 3.5 GB, then 4 GB with the "HUGEMEM" Linux kernel patches (flush TLB on every syscall). The only other solutions have been Sparc-64 and IA64, which have been, hmm, less than price/performance competitive.
Posted Apr 18, 2004 5:39 UTC (Sun)
by evgeny (subscriber, #774)
[Link] (1 responses)
I didn't get it at all. Neither the 32->36 step nor (more importantly) the connection between sizeof(int) and addressable memory - unless you're talking about braindamaged apps casting pointers to ints of some kind.

> Also, 64-bit mode beats 32-bit mode on Opteron only for "small memory"
> applications.

By "big memory" you probably mean not big enough to hit the 2/3/4GB (depending on kernel patch) native limit of x86. Anything above should clearly favor 64bit.

> Typical applications show a 1.6x increase in memory space (each struct
> being a mix of pointers (64-bit) and ints (32-bit).

Uh? A typical app spends half of its allocation in pointers?! There are such apps certainly, but they are 1) not memory-hungry AND 2) don't need a high CPU<->RAM bandwidth. Anything really memory-intensive should use contiguous memory areas (arrays or some custom mappings) in the first place, in which case the overhead is negligible. The only exception is probably memory debuggers (BTW, one thing I miss badly for x86_64 is valgrind).
Posted Apr 18, 2004 15:21 UTC (Sun)
by RichardRudell (subscriber, #7747)
[Link]
I'm not talking about "brain-damaged" apps which mix int and pointer. I'm talking about a different type of brain-damage -- using an "int" to count, for example, the number of transistors on a chip. This "int" overflows at 2^31, meaning that, regardless of the fact that I can now address 1 TB, I'm going to be hit by a different type of problem before I reach that limit. This is a different problem from "make applications 64-bit clean". It is to "upgrade" applications so they use 64-bit ints when necessary.

Re: "uh"; I'm speaking from experience as a user of IC design tools which are (1) memory hungry (4GB-12GB for a single process is not atypical); (2) need high CPU/RAM bandwidth (all of that memory is aggressively used); and (3) show a 60% increase in virtual memory when running in 64-bit mode. I am not attempting to present these as "typical" applications; just that they are important to me ...

I anxiously await Intel's entry into the field with IA32e. The ensuing competition will be great for everyone, both in increasing performance and lowering price.
Posted Apr 27, 2004 21:05 UTC (Tue)
by slamb (guest, #1070)
[Link]
That's not so bad. They left room in the instructions to address the full 64 bits. They'll be
able to replace the hardware with something that supports the full 64 without rewriting the
compilers and recompiling all the software.
IIRC, the first "32-bit" Intel machines couldn't actually address 4GB (or even 2GB) of memory,
either. But it didn't really cause a problem; by the time people actually wanted to do that, they'd
bought new processors. I expect this will be the same situation.
Posted Apr 24, 2004 20:48 UTC (Sat)
by jmayer (guest, #595)
[Link]
When I built my Opteron box late last year, I decided to start with the Fedora Core 1 sources and cross-compile it myself. It was very educational and ultimately successful. But I really didn't like having /lib and /lib64; /usr/lib and /usr/lib64 etc. I'd rather have legacy libraries go in /lib32, and native 64 bit libraries go in the standard locations. I ended up compiling a pure 64-bit system with regular library paths. I'm not terribly concerned if this program or that loses 5% performance; I just wanted a 64-bit system I could put 16GB of ram in without running into the problems a 32-bit linux box has when you load it with that much memory.
lib64
obvious question
It's an embarrassingly blunt question, to say the least ... but how would this (moderately encouraging) experience compare to a Mandrake or SUSE 64-bit distribution ... any ideas on that?
I've had some problems booting with USB, specifically if a USB keyboard or mouse is selected in the BIOS or I leave my MMC card in the reader. Note to self: must write it up.
Suse 64 Professional experience
We can be thankful that some years ago, John "Maddog" Hall convinced DEC to provide an Alpha box to Linus, so many 64-bit issues were solved way back then. Because there has been a continuous stream of (less popular) 64-bit processors which run Linux, the 64-bit code base has matured quite nicely.
64-bit new - 64-bit old
The fact is, 32 bits suffice for almost every quantity we need to manipulate with computers. The exception, increasingly, is memory. We have hit the point where we are running out of address space.
The Grumpy Editor goes 64-bit
"It seems that 32-bit processors with 36 or 40-bit address buses could prolong the life of 32-bit (instruction) processors for some time."
Unless, of course, you want to get back into multiple memory models and "near" and "far" addresses. Personally, I don't miss those days at all.
It's not really the memory management of the Linux box as much as the memory management of the PC architecture itself. There are limits to where a PCI bus, BIOS, and other things can be 'hacked' into working, and each time more memory is extended it gets more hackish than a 'clean-minimalist fix'.
The Grumpy Editor goes 64-bit
The Opteron 64-bit approach is actually simpler to use for both new and legacy software than the 36- or 40-bit memory extensions that Intel is providing, and it performs better as well.
A shameless compliment, I can't resist.
The Grumpy Editor Rocks!
The Grumpy Editor goes 64-bit
Additional assembler-level and Intel-MMX instruction set optimizations for Intel x86 compatible processors (Win32 & Linux platforms), offering several times increase in the processing performance.
Hurrah! Another brave soul willing to risk the lure of the new world!
Nice one Jon - congrats and I hope you enjoy your brave new world. I'm longing to try out an AMD64 system, but it will probably have to wait a bit longer. I think I'll go the mega-laptop route - something about having portable 64-bit power appeals to me! But they're still rather pricey, and as you discovered, Gentoo (my distro of choice) isn't quite fully there on the AMD64 support yet.
Both x86 64-bit computing (AMD64 and IA32e) and IPv6 becoming mainstream is what I'm craving for.
Do you think that we will someday need more memory than can be addressed with 64 bits?
Suppose you had some kind of memory storage device which could store 1 gigabyte in a 1 millimeter by 1 millimeter square. Then your whole system would occupy a field 130 meters by 130 meters. Or you could go 3-D -- suppose you have a 1 GB device 1 millimeter by 1 millimeter by 1 millimeter. Then you would need a volume 2.5 meters by 2.5 meters by 2.5 meters to hold enough of these devices to store 2^64 bytes.
Back when people were storing bits in induction coils nobody thought that we would need 1 MB of RAM, and the future computers were the size of planets. What makes you think that the rate of growth would slow any? It sounds like a couple of moles of memory cells would do the trick pretty well in a very small space. It just will be different. It may also be that future MMOGs will be simulating things so close to reality that 2^64 would be reasonable.
2^64 bytes is enough for any human
Back when people were storing bits in induction coils nobody thought...
2^64 bytes is enough for any human
At some point, the "we'll always need more" belief has to be wrong. For example: do you think we'll need more than 2^128 memory? I don't. How many atoms of matter are there in the universe?
2^64 bytes is enough for any human
First, imagine some sort of 3-d movie.
Then, an interactive 3-d movie.
Then consider a large-scale 3-d movie; i.e. modelling what happens to each of those atoms in the universe over time.
Or, perhaps, modelling the little tiny components (and sub-components ad nauseam) of even one of those atoms. The possibilities are endless... and perhaps, storing the 'feel' & physics reactions of physical textures, the bonds between each of those atoms, etc.
The point is not whether or not it would be useful to have an accurate model of the
universe, but that it is impossible. Clearly you can't do better than (or even as well as)
harnessing the largest set of good quantum numbers of every particle in the universe. (Your
computer has to be in the universe!) So by examining the number of particles in the
universe, you can come up with a number that is larger than the maximum possible storage
space of a computer. Take the base-2 log and you've got an
inexhaustible address space.
2^64 bytes is enough for any human
Suppose you had some kind of memory storage device which could store 1 gigabyte in a 1 millimeter by 1 millimeter square. Then your whole system would occupy a field 130 meters by 130 meters. Or you could go 3-D -- suppose you have a 1 GB device 1 millimeter by 1 millimeter by 1 millimeter.
--To blazes with the POST; you'd basically have to check that much memory on-the-fly (before malloc returns) and/or in the background.
Do you think that we will someday need more memory than can be addressed with 64 bits?
2^64 bytes is enough for any human
2^64 bytes is enough for any human
"Clearly you can't do better than (or even as well as) harnessing the largest set of good quantum numbers of every particle in the universe. (Your computer has to be in the universe!) "
2^64 bytes is enough for any human
2^64 bytes is enough for any human
Nice article. I, too, was faced with the question of which distro to choose for the first dual Opteron I ordered for our lab. At that time (a few months ago), among "well known" (to me) distros only SUSE officially supported amd64, RH was at beta, but both would cost considerable $$. Not that I'm against commercial distros, but when kinda _forced_ to pay for Linux, I felt offended ;-). Debian/amd64 was at an even scarier state than now (yep, I have an a.out->elf conversion experience as well, and, too, with years such an entertainment seems less attractive...), so I decided to give Gentoo a try. It was at beta then, as well. Surprisingly (it was my first Gentoo experience), the installation was reasonably smooth (and, I dare say, reasonably fast, due to the power of the system). Not without some rough edges, of course. A lot of apps/utilities were masked so I struggled quite a bit with emerge. On the other hand, the box was intended for numbercrunching tasks, so not much beyond the base system was needed. Once configured, it runs nicely, updates are easy etc. And Gentoo/amd64 doesn't have the dreadful /lib64 issue.
I second the Gentoo endorsement, though I've been using Gentoo on various AMD and Pentium x86 boxes for the past 3 years. I built my first Athlon 64 box a few weeks ago, and I've been very happy with the Gentoo x86_64 experience thus far. I've emerged a number of 'masked' applications and they've all worked perfectly for me.
The Grumpy Editor goes 64-bit
Our lab started using Gentoo for our ageing Alphas. When we moved to Opterons, we kept using Gentoo. Things have gone relatively well for us. We have no major gripes with Gentoo, but of course there's the occasional grumble. We're using pure 64 bit right now, which isn't a problem since we build all of our own research code from source.
One more thing. Regarding the 64- vs 32-bit performance: of course, one shouldn't expect a big difference for 32-bit apps (I mean those manipulating 32-bit integers and floats) - not without some genuine magic inside the processor and/or compiler, at least. But with long longs and doubles, the difference should be impressive in most cases. My own numbercrunching codes (using doubles) jump exactly twice in productivity when going to 64 bits. Or, compare the user time of md5sum run over a large file (e.g. an ISO image).
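Something along these lines, say; the ISO name is only a placeholder, and the 32-bit md5sum is assumed to be a separately built copy:

    # Native, 64-bit md5sum over a large file (run it twice, so the file
    # is in the page cache and the disk drops out of the measurement):
    time md5sum some-distribution.iso
    # The same file through a 32-bit build of md5sum, if one is around:
    time ./md5sum-32bit some-distribution.iso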
For floating point number crunching an Itanium is a more obvious competitor than a Xeon, and will comfortably outperform an Opteron even though running at a lower clock rate.
Itanium is in a different price domain. As to performance, I tried Itanium-1 and wasn't impressed. Probably Itanium-2 is different.
The Grumpy Editor goes 64-bit
The AMD64 doubles the number of general purpose registers. That may cause a big difference for some 32-bit apps.
The Grumpy Editor goes 64-bit
You must compile your apps into 64-bit binaries in order to get access to the extra registers.
Wow. Lots of feedback on what should now be an old topic (-:
Richard Rudell
Oops. 2^36 is 64 GB. But you can tweak the numbers however you please, make the "thing" 128-bytes instead of 32, and the result is the same. Still too close for comfort.
> Worse, because "int" is still 32-bit, we should really call this the
> "36-bit" machine, i.e., many applications will break because they count
> the "number of things" using an "int"; if the "thing" is 32-bytes, the
> "int" overflows at 2^36 bytes. That's only 16 GB which is too near current
> reality for my liking.
> Also, 64-bit mode beats 32-bit mode on Opteron only for "small memory"
> applications. "Big memory" applications, i.e., why you want 64-bits in
> the first place, still lose by 5-20%. See, the problem is pointers being
> 64-bits reduces the effective size of the L1/L2 cache and cause increased
> traffic to DDR.
> Typical applications show a 1.6x increase in memory space (each struct
> being a mix of pointers (64-bit) and ints (32-bit). The extra 8 registers
> and the new ABI (pass arguments in registers, no frame pointer) help, but
> you just can't overcome a 60% increase in memory traffic when you really
> need to use all that memory.
Sorry, I should have been more clear.
BTW: it's not 64-bit addressing. Hypertransport has a physical limitation of 40-bits. Yeah, I
couldn't believe it either until I read the specs. 1 TB is a nice improvement over 4 GB, but it's not
the 2^64 talked about in an earlier post. Forget virtual address space (currently 48-bits on
Opteron), I'm talking about physical address space. There is a hardwired limit which is going to
hurt sooner than later.
64-bit?
SoundTouch surprise
Have you found any time to look into this topic by now?
Joerg
PS: If LWN had warned me that my subscription was about to expire, I would have paid before my holiday. This way you've lost about 1 month of subscription fees.