
Some Thoughts on the Current State of 64-bit Computing

February 16, 2005

This article was contributed by Ladislav Bodnar

In December last year we set out to write a series of articles evaluating Linux distributions that provide 64-bit editions of their products. We looked at the semi-official Debian Sid port, Fedora Core 3, Gentoo Linux 2004.3, Mandrakelinux 10.1, SUSE LINUX 9.2, and a development version of Ubuntu Linux "Hoary" to see how ready they were for their roles as graphical development workstations. It was an interesting journey into the world of leading-edge computing. There is little doubt that the AMD Athlon 64 3500+ processor we used for testing is an incredibly powerful and fast chip, capable of completing many tasks a lot faster than any of the current 32-bit processors. And while many of the popular distributions were quick to embrace the new platform, they have done so with varying degrees of success. What follows is a summary of our observations.

First, let's make one thing clear right from the start: just because you have bought or downloaded a Linux distribution designed for 64-bit processors, it does not mean that it is entirely 64-bit. In fact, the default installs of Fedora, Mandrakelinux, SUSE and Ubuntu are heavy hybrids of 32-bit and 64-bit applications and libraries. Debian provides a "pure" 64-bit system, but it also makes available a 32-bit compatibility layer for installing 32-bit applications. Gentoo is, ultimately, the most customizable of all distributions, so it's natural that one can choose between a pure 64-bit system or a mix of the two - again, through a compatibility layer.

Why is the 32-bit compatibility layer still needed? There are three reasons. Firstly, the current stable version of OpenOffice.org (1.1.x) does not compile on 64-bit processors. With its superior document conversion filters to and from MS Office, OpenOffice.org is an essential application on any workstation. And although it is expected that OpenOffice.org 2.0 will compile on 64-bit platforms, the early betas still do not, or at least, nobody has been able to build one successfully. Secondly, there are several other open source applications that do not work on 64-bit platforms; many of these are multimedia players and proprietary codecs. While these are not considered essential, the fact that they are missing from many distributions has probably contributed to the slow migration of mainstream users to Linux. Finally, there are non-free binary-only applications that many users and developers consider useful to have around: NVIDIA and ATI graphics card drivers, Acrobat Reader, Opera, Real Player, Macromedia Flash Player and perhaps a few other pieces of software. Of these, only NVIDIA and ATI have made an effort to build 64-bit editions of their drivers (the ATI driver is currently in beta testing).

Therefore, the challenge for distributions that provide a 64-bit product is twofold: they not only have to compile the Linux kernel, libraries and open source applications for the new platform (some of which need source code modifications before they compile successfully), they also need to integrate 32-bit software into the system. As we've mentioned already, most distributions solve the latter challenge by providing two sets of libraries and linking each application against the appropriate set. This results in substantially increased hard disk and memory requirements - not a big deal on a modern computer, but still a considerable overhead compared to any 32-bit system.

Interestingly, Debian has come up with a different approach. According to their documentation, a second system representing a minimal 32-bit Debian can be installed into a chroot-ed folder, together with all the necessary 32-bit applications. With a few scripts or aliases, the 32-bit subsystem can be integrated transparently into the main 64-bit system. We had great success with this approach. As an example, web developers will find it easy to install Opera and Flash Player into the chroot-ed subsystem and use Opera for viewing Flash-enabled web sites. Another peculiar aspect of Debian is the availability of two 64-bit branches, called "pure64" and "gcc34". The applications in the "gcc34" branch are compiled with a current CVS version of GCC, which will eventually become GCC 4.0 and which is said to produce better-optimized 64-bit binaries. We tried both branches, but we found the "gcc34" branch too unstable, with frequent crashes of XFree86.

Of the distributions we tested, the current versions of SUSE LINUX and Fedora Core turned out to be the most stable and bug-free products. Especially SUSE was a pleasant surprise in that there is a large number of third party repositories with 64-bit applications for it, and after installing apt-get, it is very easy to install just about any software one might desire. Also, the developers of SUSE have found a way to integrate the Flash plugin with Konqueror through the DCOP communication layer between the browser and the plugin. This option, however, does not work with any of the Gecko-based browsers or Opera. As for Fedora Core, it also turned out to be a very trouble-free distribution. However, we were surprised to see that third-party repositories were not as well-populated with 64-bit applications as those for SUSE. Also, between Fedora's two advanced package managers, we had good success with yum, but were unable to make apt-get work correctly.

We found both Gentoo Linux 2004.3 and the FTP edition of Mandrakelinux 10.1 buggier than either SUSE or Fedora. This is surprising since, unlike Debian's port, which is officially still beta, both of them were "stable releases". With Gentoo, several applications failed to compile, while Mandrakelinux had an unpolished installer with many obvious errors, and we had much trouble setting up sources for keeping the distribution up-to-date. Nevertheless, none of these problems were critical, and once they were overcome, both Gentoo and Mandrakelinux were solid and perfectly usable products. It is interesting to note that of all the 64-bit distributions on the market (besides the high-end enterprise-level offerings from Red Hat and Novell), Mandrakelinux is the only one without freely downloadable ISO images; they can be obtained either by joining the €120/year Mandrakeclub or by buying the distribution from Mandrakestore, where it sells for €120 plus shipping and handling.

As one would expect, 64-bit Linux live CDs have also started to emerge recently. Ubuntu has done a lot of work to build a fully supported live CD for 64-bit processors which will officially launch with the release of Ubuntu Linux 5.04 "Hoary", expected in April this year (beta versions are already available for download and testing). The developers of Gnoppix have also been working on a Ubuntu-based live CD for 64-bit processors and have produced several beta releases. If you prefer the KDE desktop, then the Knoppix-based KANOTIX project has recently produced a very interesting live CD for 64-bit processors with some bleeding-edge hardware detection modules. There is also Knoppix64, but this project has been dormant since its first official release last June. Interestingly, there are, as yet, no RPM-based live CDs for 64-bit platforms.

Finally, if you are in the market for a new computer, should you get one with a 64-bit processor? And once you have it, should you install a 32-bit or a 64-bit distribution? The answer to the first question is a resounding "yes" - AMD64 is a great processor with a large range of excellent inexpensive motherboards now available for it. As for the second question, the answer is a "maybe", but probably closer to a "no" for most users. Let's be honest: the speed difference between a 32-bit and a 64-bit operating system is marginal at best, and all of the current 64-bit Linux distributions add a layer of complexity by having to provide compatibility mechanisms for those applications that have not been ported to 64-bit systems. This extra complexity is probably not worth the hassle. That said, there are cases where the 64-bit processor has considerable advantages: on systems with large databases that require enormous amounts of memory, on machines used frequently for encoding huge media files, or on those designed for heavy web serving with data compression or other intensive tasks.

And of course, there are those of us who simply can't resist the temptation to be on the bleeding edge of hardware and software development, and who feel that running a 32-bit operating system on a 64-bit processor is just plain silly....



Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 2:15 UTC (Thu) by dhess (subscriber, #7827) [Link]

Of these, only NVIDIA and ATI have made an effort to build 64-bit editions of their drivers (the ATI driver is currently in beta testing).
I guess this is a nitpick, but I think the inclusion of OpenGL drivers in this paragraph about 32-bit compatibility is misleading for a few reasons. First of all, I think that Nvidia has done more than "made an effort" to provide an x86-64 version of their driver; the driver is quite fast, fully-featured (wrt the x86 version), and very stable. Second, you can't use x86 kernel modules with an x86-64 kernel anyway, so there's no compatibility option for drivers as there is for the applications that are mentioned.

Anyway, just wanted readers to know that if you're concerned about switching to x86-64 because of the quality of Nvidia's OpenGL driver, you shouldn't be. You can even run x86 OpenGL apps with full hardware acceleration; the Nvidia x86-64 driver supports the 32-bit x86 ioctls.

I can't speak for ATI's driver, as I haven't used it.

And sorry, one last nitpick -- this article should be titled, "Some Thoughts on the Current State of GNU/Linux on x86-64." There are lots of other 64-bit architectures supported by GNU/Linux that aren't covered by this article.

64-bit Computing: YES if you have lots of RAM

Posted Feb 17, 2005 2:59 UTC (Thu) by zmi (guest, #4829) [Link]

One important advantage of 64-bit is memory - 64-bit Linux has no problems
with 1GB, 4GB or 64GB RAM. With the 32-bit version, you may need to
recompile the kernel (to enable HIGHMEM or PAE) when you upgrade RAM.

Currently, a good workstation uses 1-4GB of RAM, and that level will
increase - so 64bit is good for now, and superior later.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 8:24 UTC (Thu) by sveinrn (guest, #2827) [Link]

>..., but all of the current 64-bit Linux distributions add a layer of complexity by having to provide compatibility mechanisms for those applications that have not been ported to 64-bit systems. This extra complexity is probably not worth the hassle.

I am using 64-bit versions of FC3 and SuSE 9.2. With FC1 and maybe FC2 there was extra hassle, but now, as an end-user, I don't notice the extra complexity, and I don't think this is an argument for going 32-bit.

The only argument left is that there are still few plugins for 64-bit Firefox, but that will likely change in the near future, at least after Microsoft releases their 64-bit XP. And the more people demand 64-bit versions, the faster Macromedia and others will produce them.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 10:06 UTC (Thu) by james (subscriber, #1325) [Link]

Also, between Fedora's two advanced package managers, we had good success with yum, but were unable to make apt-get work correctly.

Known bug: apt does not understand Fedora-style bi-arch distributions (specifically, it does not understand that there should be two packages with the same name and version: e.g. glibc for i686 and x86-64).

See http://www.linuxtx.org/amd64faq.html, for example.

apt is not currently part of either Fedora Core for x86-64 or Fedora Extras for x86-64.

James.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 10:33 UTC (Thu) by shane (subscriber, #3335) [Link]

And once you have it, should you install a 32-bit or a 64-bit distribution?

One set of users who definitely should go with 64-bit are database users - at least MySQL users.

MySQL running in 64-bit mode runs much, much faster (like twice as fast) than in 32-bit mode for many operations.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 12:57 UTC (Thu) by mwh (guest, #582) [Link]

I'm not sure I like the equation "64-bit Computing" == "x86-64". There *are* other 64 bit chips, you know...

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 13:02 UTC (Thu) by Kluge (guest, #2881) [Link]

"Let's be honest about it, the speed difference between a 32-bit and 64-bit operating system is marginal at best..."

It's my understanding that 64-bit systems can be slower in principle because the 64-bit values take up more room, especially in cache.

However, it's also my understanding that in extending x86 to 64 bits, AMD has somewhat alleviated the deficiencies of x86 by, for instance, providing more registers. So the speed difference of properly optimized x86 and x86-64 binaries *could* be significant.

Is this correct?

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 15:41 UTC (Thu) by Duncan (guest, #6647) [Link]

Yes, the speed difference can be and often is "significant". Your
understanding on both counts is correct.

64-bit values taking up more space (and bandwidth, don't forget memory and
disk bandwidth) combined with the fact that 32-bits is "big enough" for
many tasks, weighs toward the 32-bit side, as does compatibility with
slaveryware as well as not-yet-ported freedomware.

OTOH, weighed toward the 64-bit side are a number of factors as well. You
mentioned the register thing. x86 as an arch has been considered a poor
32-bit implementation for many years, because of the relative scarcity of
full CPU speed (no look-up time lost direct assembly language reference)
registers. AMD64/x86_64 is far more competitive with other 64-bit archs
in this regard.

However, likely one of the biggest performance positives for AMD64 is
often overlooked entirely. Because it's a new platform at its "base"
level in terms of CPU instruction sets, unlike x86
(386/486/686/mmx/sse/sse2/etc) applications precompiled for the platform
are generally decently optimized for it. This applies equally well to
freedomware but normally binary distributions, and
slaveryware*/proprietaryware from the likes of MS. How many people are
running x86 binary platforms (libre or not) optimized for the lowest
common denominator, i386, i586, i686 if one is lucky, because it's not so
practical to deliver all those options and more (latest SSE optimized,
etc) in a multitude of binary packages? With AMD64, it's early enough in
the history of the platform that simply being compiled for AMD64/x86_64
means there's a closer match between the capacities of the hardware, and
what the software is compiled to take advantage of, yet by now late enough
that at least for those packages compiled with gcc 3.4 and later, there's
significant native optimization for the platform. No more running i386 or
i586 rpms because that's what the distro has chosen as its lowest
common reference platform target!

(Of course, I should note that I'm running Gentoo and compile everything
from source with my own choice of optimizations, so the above binary
advantage doesn't apply here. However, I switched from Mandrake, which
used i586, and found RH's i386 targeting even more curious -- well over a
decade after the chip became obsolete. Yes, I'm aware of Debian and in
fact would have probably switched to it if Gentoo hadn't been available,
but I haven't the foggiest what their 32-bit binaries target, so I had to
stick with rpms in the above.)

Of course, the flatter memory implementation of 64-bit tends to
decomplicate things as well, and that will become an ever greater issue as
>= 1GB desktop memory becomes ever more common, but that doesn't become a
HUGE issue until >4GB, which is still some time off for most desktop use,
anyway.

I'd certainly recommend going 64-bit with the software as well, for most
folks wishing to stick to freedomware. For those willing to compromise
their freedom in the interest of pragmatism, it really depends on what
apps they regularly use. If most of them are 64-bit, going 64-bit will
likely be the better option. Those running particular 32-bit
applications, or those targeting widest 32-bit compatibility in general,
however, as would be the case for heavy slaveryware game players who've
chosen to let the game authors be their master and choose their platform
for them, will unsurprisingly find that running their AMD64s as 32-bit
processors is naturally the closest compatibility possible, since in that
case, for the software anyway, it really /is/ 32-bit, not just 32-bit
compatible.

.....
* Slaveryware: A reference to the quote I have as my USENET sig:
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman in
http://www.linuxdevcenter.com/pub/a/linux/2004/12/22/rms_...

I found the first page of that interview very thought provoking, even
where I didn't agree with it. Obviously, however, I agree with the above
strongly enough to have adopted the terminology for my own.

Duncan

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 16:40 UTC (Thu) by elanthis (guest, #6227) [Link]

"How many people are
running x86 binary platforms (libre or not) optimized for the lowest
common denominator, i386, i586, i686 if one is lucky, because it's not so
practical to deliver all those options and more (latest SSE optimized,
etc) in a multitude of binary packages?"

Fortunately, it doesn't matter if you compile for i386 or not, because almost every piece of software that needs SSE or the like does the detection at *run-time*. Simply compiling for a newer version of an architecture isn't going to make your apps run any faster; heck, depending on exactly what the app does and which compiler you use, it might even end up slower. You can't automatically turn some random bit of code into an SSE-using speed demon.

For the few apps and libs where the recompilation *can* make a difference, at least some "lowest common denominator" distros provide multiple architectures of those packages, Debian and Fedora included.

Fedora and optimisation

Posted Feb 18, 2005 3:01 UTC (Fri) by gdt (subscriber, #6284) [Link]

Although FC3 contains "i386" packages which only use 80386 instructions, the choice and order of those instructions is optimised for the Pentium 4.

The problem with the tactic of providing i686 versions of packages is that the denomination no longer discriminates between the processor classes of interest: Pentium II, Pentium III/Pentium M and Pentium 4.

The tactic also fails for machines with a targeted use. It might be acceptable not to get the utmost out of Apache or Samba when they are used on a desktop, but it is not so acceptable when Apache is used as a web server or Samba is used as a file server.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 18, 2005 7:36 UTC (Fri) by nix (subscriber, #2304) [Link]

You can't automatically turn some random bit of code into an SSE-using speed demon.
-mfpmath=sse? :)

(Of course, this breaks the i386 ABI, so is really an argument in favour of your point...)

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 17, 2005 21:57 UTC (Thu) by dan_b (subscriber, #22105) [Link]

With its superior document conversion filters to and from MS Office, OpenOffice.org is an essential application on any workstation

Speak for yourself. My workstation seems to be working quite nicely without this apparently essential component.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 19, 2005 3:57 UTC (Sat) by flewellyn (subscriber, #5047) [Link]

If it doesn't open in Emacs, it's not worth reading!

(Besides, if you do have to open a word file for some reason, you can use the very good wvWare tools from wvware.sourceforge.net. Command-line tools that convert MSWord files into lots of nifty things!)

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 20, 2005 19:16 UTC (Sun) by bk (guest, #25617) [Link]

IIRC, AbiWord uses wv/wvware internally for its Word/RTF filters.

The ATI driver isn't beta

Posted Feb 18, 2005 6:14 UTC (Fri) by dberkholz (subscriber, #23346) [Link]

"(the ATI driver is currently in beta testing)"

I'm not sure where this information came from; the driver was released roughly a month ago.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 18, 2005 17:29 UTC (Fri) by arafel (subscriber, #18557) [Link]

"[...] should you get one with a 64-bit processor? The answer [...] is a resounding "yes" [...]. Let's be honest about it, the speed difference between a 32-bit and 64-bit operating system is marginal at best"

Is it just me, or are there some contradictions in that? It's basically saying "Get one because they're cheap enough now, but it won't actually make much difference". Gee, thanks. :-)

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 19, 2005 0:41 UTC (Sat) by giraffedata (subscriber, #1954) [Link]

Is it just me, or are there some contradictions in that?

There's no contradiction. The first statement compares 64 bit CPUs and 32 bit CPUs (actually, with context, x86-64 and x86) and the second statement compares 64 bit operating systems and 32 bit operating systems.

It implies that an x86-64 with 32 bit Linux is substantially better than x86 with 32 bit Linux, while x86-64 with 64 bit Linux is not better than x86-64 with 32 bit Linux.

Inline i386 assembly

Posted Feb 19, 2005 15:39 UTC (Sat) by stock (guest, #5849) [Link]

The big hurdle to overcome in order to move to a 64-bit-clean source tree of a Linux distro is, as mentioned, the binary-only applications: graphics drivers, audio and video codecs and things like Flash plugins. Besides that, numerous applications carry some inline i386 assembly in their source code. For an example, see the kwave project on kwave.sourceforge.net and my contribution to it on http://crashrecovery.org/mp3-ripkit/kwave/mdk101/ . Mostly the assembly is used to "probe" for CPU capabilities:
         
diff -u -r kwave-0.7.2/libkwave/cputest.c kwave-0.7.2.amd64/libkwave/cputest.c
--- kwave-0.7.2/libkwave/cputest.c	2004-12-30 04:09:35.000000000 +0100
+++ kwave-0.7.2.amd64/libkwave/cputest.c	2004-12-30 04:31:16.791294000 +0100
@@ -14,6 +14,14 @@
  *   cleanly within this new environment
  *   by Thomas Eschenbacher (Thomas.Eschenbacher@gmx.de)
  *   had to include config.h and cputest.h
+ *
+ * 2004-12-30
+ *   As is stated on http://www.tortall.net/projects/yasm/wiki/AMD64 :
+ *   "Instructions that modify the stack (push, pop, call, ret, enter,
+ *    and leave) are implicitly 64-bit. Their 32-bit counterparts are
+ *    not available, but their 16-bit counterparts are."
+ *   Adjusted the failing popl %0 commands when compiling on
+ *   AMD64/X86_64 by Robert M. Stockmann (stock@stokkie.net)
  */
 
 #ifdef HAVE_CONFIG_H
@@ -47,7 +55,11 @@
                           /* See if CPUID instruction is supported ... */
                           /* ... Get copies of EFLAGS into eax and ecx */
                           "pushf\n\t"
+#if defined(__x86_64__) || defined(__amd64__)
+			  "popq %%rax\n\t"
+#else
                           "popl %0\n\t"
+#endif
                           "movl %0, %1\n\t"
 
                           /* ... Toggle the ID bit in one copy and store */
@@ -58,7 +70,11 @@
 
                           /* ... Get the (hopefully modified) EFLAGS */
                           "pushf\n\t"
-                          "popl %0\n\t"
+#if defined(__x86_64__) || defined(__amd64__)
+			  "popq %%rax\n\t"
+#else
+			  "popl %0\n\t"
+#endif
                           : "=a" (eax), "=c" (ecx)
                           :
                           : "cc"
As one can see, the devil is in the details. Then again, assembly optimizations on CPUs like the AMD64 should, in my opinion, never go beyond probing for CPU capabilities, as this iron is blindingly fast as it is; one just needs to detect its capabilities. MPlayer, for example, also has complications with some inline assembly in its C code. But again, these problems can be solved.

Just remember that projects like OpenSSL have also had to give up their hand-tuned i386 assembly on Intel EM64T and AMD64 CPUs. The real problem currently is indeed in things like 64-bit drivers for video cards and third-party applications like Adobe Acrobat Reader. But that, too, should only be a matter of time.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 24, 2005 14:36 UTC (Thu) by anton (guest, #25547) [Link]

"the default installs of Fedora, Mandrakelinux, SUSE and Ubuntu are heavy hybrids of 32-bit and 64-bit applications and libraries."

In the Usenet posting <2004Aug8.070043@mips.complang.tuwien.ac.at> I counted the number of x86-64 and other ELF files in /bin and /usr/bin of Fedora Core 1 (with "Everything" installed).

The result was 726 x86-64 binaries and 77 others; the non-x86-64 binaries for a large part seemed to have to do with the Berkeley DB; there were also a number of 32-bit compilers which were provided as 32-bit binaries, as well as a few other files. In the almost full year while I used this distribution, I never used any of these applications.

However, I did use some of the binaries I had compiled earlier, so the 32-bit libraries did come in handy.

Some Thoughts on the Current State of 64-bit Computing

Posted Feb 25, 2005 22:15 UTC (Fri) by other-iain (guest, #2810) [Link]

I remember being at SGI when we introduced a 64b O/S. We also put a bunch of goodies in the CPUs in an attempt to mitigate the performance hit of moving 8 bytes instead of 4 for every pointer.

For a while, we supported two kinds of binaries, named for the compiler flags which selected the type:
-o32: this was the old 32-bit binaries
-n64: new 64-bit binaries

There was a lot of discussion internally about whether we should enable the 64b goodies (extra registers primarily, but also some extra instructions too) without forcing 64b pointers. In the end we did it:
-n32: new 32-bit binaries

The extra registers added real performance gains for numerous customers, and n32 ended up being what most programs compiled with. Eventually, the 64b model was still valuable for the killer apps that people bought the machine for (database, crash sims, supernova sims, nuke sims), but nearly everything else was compiled -n32. It was the right pragmatic balance, and the cost was having to support three binary types instead of two.

I think Linux is going to do an -n32 as well, eventually. I wish they'd just get on with it. Until Linux n32 is available, the 64b width, extra registers and extra instructions on the Athlon64 just won't make any difference to most people, which is a shame.

Copyright © 2005, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds