
My kid hates Linux (ZDNet)

Posted Apr 14, 2008 18:05 UTC (Mon) by andikleen (guest, #39006)
Parent article: My kid hates Linux (ZDNet)

[I will probably get flamed for this, but someone has to say it.]

The problem is really that Debian has been unable for many years to do proper 32-bit emulation
support in user space. The kernel handles it fine, but
the distribution needs to be properly multilib, which is not exactly rocket
science, as Red Hat, Mandrake, SUSE and other distributions
have demonstrated.

But for some reason Debian seems to be unable to do multilib properly.
And Ubuntu seems to be unable to do such a change without Debian doing it for them first. Some
other distributions (like Slackware) also seem
to be unable to do this, but fortunately they are not widely used.

My only comment is: the person is using the wrong distribution.

And Debian/Ubuntu is giving the whole x86-64 setup a bad name by still
not getting this right after so many years. x86-64 can be nearly
100% (okay, let's say 99+%) compatible with i386 at the user-land level,
and the kernel implements that fine. The distribution just needs
to do its (small) part too by supplying the proper libraries.

Disclaimer: I worked on the x86-64 32bit emulation, that is why I'm
a little biased. But it's pretty annoying that one's work goes to waste
just because some popular distributions cannot get some relatively
simple infrastructure changes done to use it properly. I also used
to work for a company doing one of the distributions mentioned above,
but it's really a generic comment.



My kid hates Linux (ZDNet)

Posted Apr 14, 2008 18:09 UTC (Mon) by JoeBuck (guest, #2330) [Link] (3 responses)

The Fedora/Red Hat approach, with dual packages for everything, generally works but does have its problems; certainly there's a bloat issue and it's easy to pull in 32-bit libraries you don't really need. But the Debian/Ubuntu approach is crippled, making it harder to run 32-bit code.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 18:42 UTC (Mon) by andikleen (guest, #39006) [Link]

Disk space is cheap, and executable libraries tend not to be that big anyway compared to other
data like graphics. There is no real reason why you can't just install all libraries in both
32-bit and 64-bit versions.

My kid hates Linux (ZDNet)

Posted Apr 15, 2008 6:17 UTC (Tue) by motk (subscriber, #51120) [Link] (1 responses)

Oh noes! Bloat! A hundred meg of my 500G drive, gone! 

SHAKE ANGRY FIST AT GOD, SCREAM NOOOOO

My kid hates Linux (ZDNet)

Posted Apr 15, 2008 6:49 UTC (Tue) by jengelh (subscriber, #33263) [Link]

100 MB is a good estimate. openfire and VMware do not even have any rpm package requirements.
$ (rpm --qf="%{SIZE}\t%{NAME}\n" -qa "*32bit*"; rpm --qf="%{SIZE}\t%{NAME}@%{ARCH}\n" -qa | grep @i.86) | grep -Pv 'openfire|VMware' | perl -aF'\t' -lne 'END{print$x}$x+=$F[0]'
117166556

All distributions do it wrong (for someone)..

Posted Apr 14, 2008 18:19 UTC (Mon) by smoogen (subscriber, #97) [Link] (2 responses)

I am a Fedora/CentOS user, and have dealt with my share of 64/32-bit problems with those
versions' lib handling. The issue is that it works great for some things and absolutely
horribly for others. But the core problem is not the distro the person chose, but that they
didn't ask their 'user' what they wanted. Finding out that they use X, Y, and Z sites and need
ActiveX or iTunes is going to kill the move right there. Finding out that they use Flash means
you will want to make sure that 64-bit/32-bit works before you start.

I say this from the bitter experience of changing my parents' computers over to Linux and
finding that nothing they did regularly was supported. That made the box secure, but useless.

64 bits overkill?

Posted Apr 14, 2008 23:05 UTC (Mon) by man_ls (guest, #15091) [Link] (1 responses)

Stupid question: why install a 64-bit version for your parents? My mother's laptop runs fine with 32 bits.

64 bits overkill?

Posted Apr 14, 2008 23:23 UTC (Mon) by smoogen (subscriber, #97) [Link]

I think 64 bits is overkill. In my case, I just installed the latest 'desktop friendly' 32-bit
Linux at the time... which turned out to be not desktop friendly enough. They could not use
their bank, they could not get into some .gov sites (this was while Adobe was dithering on
bringing an updated Flash to Linux), the grandkids could not play the website games they
wanted, etc. The questions are the same though, and would have told me that 64-bit was not
appropriate and that their main websites were not supported.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 18:21 UTC (Mon) by BenHutchings (subscriber, #37955) [Link] (18 responses)

The reason is that dpkg assumes a single host architecture and identifies installed packages
uniquely by name, not by an (arch, name) or (arch, name, version) tuple. As a kluge, Debian
and Ubuntu provide selected i386 libraries for amd64 in the ia32-libs package. Since this is
mostly useful for running non-free software that can't simply be recompiled for amd64, there
aren't many Debian developers interested in improving it. But there is some ongoing work on
multiarch support in dpkg.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 19:03 UTC (Mon) by JoeBuck (guest, #2330) [Link] (17 responses)

It's short-sighted to believe that the only reason for wanting to run a 32-bit executable on a 64-bit platform is to run non-free code. For a program that must manipulate a huge pointer-heavy data structure, such as an electronic design automation program or mechanical CAD application, the 64-bit version needs nearly 2x the memory, and a 32-bit app whose data fits in memory beats a 64-bit app that page-faults a lot, by a factor of 10 or more.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 19:34 UTC (Mon) by andikleen (guest, #39006) [Link]

Also, you might not want to just recompile your old software, even if it's free.

Why should you be forced to, when both the CPU and the kernel have no problem at all still
executing it at basically native speed? I often
copy over old binaries from other systems that I compiled ages ago.
Why should I redo that work?

This "32-bit compat is only for non-free software" excuse really doesn't
make much sense.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 19:42 UTC (Mon) by pizza (subscriber, #46) [Link] (8 responses)

For "generic" 64 vs. 32-bit comparisons I'd agree, but we're talking about x86 and x86_64
here, and the latter has many architectural improvements over the former (such as double
the number of registers).  As a result, most software tends to run slightly faster due to less
register pressure, despite the additional overhead of larger pointers.  Additionally, if
you're doing lots of 64-bit math (as CAD/EDA is wont to do), things get *much* faster.

And yes, I've benchmarked this for myself.  The one thing that's trivial for me to recompile
now (dcraw) gives me an 11% improvement when built as a 64-bit binary.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 23:47 UTC (Mon) by djabsolut (guest, #12799) [Link] (1 responses)

Not to disparage the generally good idea of moving towards x86_64, but an improvement of 11% is not really worth the hassle of incompatibilities. What are the speedups like on average?

(AFAIK, modern processors "translate" the crufty x86_32 code into their own internal code, and along with a large cache this makes issues such as lack of registers not really a problem. The only practical reason one would want to use x86_64 is larger available memory space and/or 64 bit math -- the number of applications needing this is dwarfed by plain-jane applications).

My kid hates Linux (ZDNet)

Posted Apr 15, 2008 3:44 UTC (Tue) by jwb (guest, #15467) [Link]

There are major differences with x86_64 that show up everywhere, not just in math.  The
calling convention on the 64-bit system is far cleaner.  More arguments can be passed to
functions in registers (6, I think) than on 32-bit systems, where the extra arguments have
to be placed on the stack.  Stack management functions on x86 are not free; they take one or a
few cycles during every function call and return.  This can add up.

x86_64 also allows more and better ways of addressing data, which can save an explicit load to
a register.

These are not theoretical improvements.  Lots of programs run much better on x86_64 than on
plain old x86.

The 64-bit systems do still have the problem of larger pointers which can crowd the cache, but
some programmers find ways around this.  BEA, for example, uses short heap pointers in their
JVM, which gives them all the speedups of the x86_64 programming model (described above)
without paying the cost of 64-bit pointers.

Sorry, Mr. pizza ...

Posted Apr 15, 2008 1:17 UTC (Tue) by JoeBuck (guest, #2330) [Link] (5 responses)

... but I wasn't speaking theoretically. I work in electronic design automation.

The doubled-memory effect really does overwhelm the effect of having more registers, 64-bit math and a better machine architecture in many real cases, particularly when the program's working set is in the gigabytes. The time to move that data through the CPU overwhelms all other considerations. The 64-bit executable wins when the working set exceeds the 32-bit address space, of course, but in the range where the 32-bit program requires 1-2 GB and the 64-bit program needs nearly double that, the 32-bit version comes out ahead.

For this reason, many EDA applications are available in both 32-bit and 64-bit versions, and the recommendation to the customer is to use the 32-bit version even on the 64-bit machine except where the problem is too large.

Sorry, Mr. pizza ...

Posted Apr 15, 2008 6:20 UTC (Tue) by motk (subscriber, #51120) [Link] (1 responses)

Counterpoint, RAM is pretty cheap these days. Just Add More.

Of course, you do come across motherboard limitations occasionally.

RAM is not the problem

Posted Apr 15, 2008 14:46 UTC (Tue) by GreyWizard (guest, #1026) [Link]

CPU cache and bandwidth limitations are the issue here, not RAM size.

Sorry, Mr. pizza ...

Posted Apr 15, 2008 6:36 UTC (Tue) by bronson (guest, #4806) [Link] (1 responses)

If 64-bit pointers are really that big a deal, how come the EDA guys don't use 4GB memory
pools with 32-bit offsets?  That way you get the speed and huge memory space of 64 bits with
the space efficiency of 32 bits.  Seems like a win-win.

It's been quite a while since I've done EDA (some VLSI layout and simulation back in 2003).  I
remember some seriously crufty software produced by vendors who would do anything to avoid an
update.  Some of the tools I used were written *and compiled* pre-1998!  It was a nightmare
trying to get that junk to run.  I eventually got the toolchain working and then I never let
anybody touch that box again.  Not so much as a security update or a package upgrade lest it
break anything.

So...  If the EDA industry is indeed pushing back against 64 bit, there might be more to it
than just pointer size inflating the working set.  :)

Sorry, Mr. pizza ...

Posted Apr 17, 2008 20:34 UTC (Thu) by jzbiciak (guest, #5246) [Link]

If 64-bit pointers are really that big a deal, how come the EDA guys don't use 4GB memory pools with 32-bit offsets? That way you get the speed and huge memory space of 64 bits with the space efficiency of 32 bits. Seems like a win-win.

Sounds like a maintenance nightmare to me, particularly if the code base is shared between 32-bit and 64-bit worlds, and if any portion of the data set has an index larger than 2^32-1. The reason I say "index" is that these pools could be homogeneous pools of structures, and so the addressed memory in that pool could actually be as large as 2^32 * sizeof(struct whatever), rather than just 2^32 bytes.

Sure, on 64-bit machines you get the compact representation. But, on all machines that share that code base, you add an additional indirection to compute your final pointer, and you've thrown up partitions in your memory map based on where these pools are. If your problem doesn't partition into pools nicely, you're hosed.

Sorry, Mr. pizza ...

Posted Apr 15, 2008 13:21 UTC (Tue) by pizza (subscriber, #46) [Link]

Fair enough; your particular daily-use EDA app (proprietary?  you've never actually mentioned
what it is) performs worse.  You use what best supports your needs, after all.

However, my daily-use apps perform significantly better under 64-bit.  That 11% improvement
with dcraw was the only one I could recreate the benchmarks on immediately, as it's trivial to
recompile.

My main daily-use app (a GCC cross-compiler building a multi-million-line codebase) runs
considerably faster under x86_64.  However, I no longer have an identical 32-bit system for
comparison, so I can't supply benchmarks without blowing half a day on it.  (The
64-bit GNOME desktop *feels* faster too, but that's obviously subjective.)

One of the folks I work with has also raved about the improvements he saw using the 64-bit
versions of the particular FPGA synthesizer tools.  

Not to mention the speedup one gets by not needing bounce buffers (and other games) for I/O.

64-bit performance

Posted Apr 14, 2008 21:57 UTC (Mon) by epa (subscriber, #39769) [Link] (6 responses)

Do you have any documented cases of x86_64 code running slower than i386 or needing more
memory?  After all 32-bit ints are still available on x86_64.  Some code might use twice the
memory when pointers are twice as big, but you only really care about memory usage for
memory-heavy apps, and those are the exact ones where you really want a 64-bit system to allow
more than 4Gibyte addressable memory for each app.

64-bit performance

Posted Apr 15, 2008 4:39 UTC (Tue) by JoeBuck (guest, #2330) [Link] (2 responses)

Take a box with 2 GB of memory. Run a program that requires a 1.5 GB working set to avoid paging with 32-bit code, where the in-memory structure is heavy with pointers. Now recompile with -m64. Voila, it now needs maybe 2.5 GB to avoid paging. You might well see the 64-bit code run 100 times slower.

We run workloads like that all the time.

64-bit performance

Posted Apr 15, 2008 6:08 UTC (Tue) by bronson (guest, #4806) [Link] (1 responses)

I suppose it's true that if you use an artificial 2.5 GB dataset and impose a 2 GB memory
limit, 32 bit would be faster.

In the real world, why wouldn't you just spend $50 for a 2GB memory upgrade?  Then the 64 bit
box would fly.  If you're not convinced, let's try this exercise again with a hypothetical 3.2
GB data set.  :)

In my experience, modern 64-bit boxes stuffed with lots of RAM are really cheap and
really damn fast.  I can't think of any reason to deploy 32-bit for servers/HPC these days.

64-bit performance

Posted Apr 15, 2008 18:38 UTC (Tue) by JoeBuck (guest, #2330) [Link]

"If you're not convinced, let's try this exercise again with a hypothetical 3.2 GB data set."

You've just answered your own question. Now you can run a problem that requires a 3.2GB working set quickly with your 32-bit executable (if you can squeeze it into the 4GB address space, and you'll need what Red Hat used to call the hugemem patch to make it work), but it takes maybe 5GB with the 64-bit executable, and the 32-bit version runs quicker.

Of course, you need the 64 bit executable when the problem size exceeds 4GB. The point is, it is useful for the developer to provide the user with both executables, to run on an operating system that can run both.

64-bit performance

Posted Apr 15, 2008 7:32 UTC (Tue) by laf0rge (subscriber, #6469) [Link] (2 responses)

The entire Linux networking stack, and especially netfilter/iptables with connection tracking,
runs 10-15% slower on x86_64 than on i386 kernels.

The main reason being that all pointers are suddenly twice as large, and thus most data
structures need at least one more cache line, resulting in significantly less of the working
set being present in cache, increasing cache misses, etc.

I think any code that has a lot of pointers in data structures should see the same effect.

64-bit performance

Posted Apr 15, 2008 19:39 UTC (Tue) by bronson (guest, #4806) [Link] (1 responses)

That's very interesting.  Has anybody tried working around this?

One solution would be to convert hot 64-bit pointer fields to 32-bit offsets pointing into a
single memory pool.  I'm not familiar enough with the networking code to know how traumatic
this would be.  (I'm definitely not saying do this everywhere; just where it really matters.)

This topic might make a fairly fascinating paper.  :)

64-bit performance

Posted Apr 18, 2008 6:07 UTC (Fri) by alankila (guest, #47141) [Link]

We should have pointers of a size intermediate between 32 and 64 bits, let's say 40-bit
pointers. The point being that they'd be large enough to address the necessary RAM
without wasting so much space.

I really don't think we'll ever grow to the point where we use all of the 64-bit pointer
address space, and pointers with the top 20-30 bits unused are just wasted space.

Too bad that the whole world thinks in 2^n.

My kid hates Linux (ZDNet)

Posted Apr 14, 2008 18:48 UTC (Mon) by madscientist (subscriber, #16861) [Link]

I agree with Andi, and I've been using and loving Debian and now Ubuntu for 10 years or so
(fled from Red Hat 5.2 or similar and never looked back).  I don't care about Flash and other
proprietary tools, but I'm trying to maintain an embedded development environment, with both
native and cross-compilation tools, that should be able to run on both 32-bit and 64-bit
systems.

On the Red Hat systems, it just works.  On the Ubuntu systems, it requires a lot of tweaking
and poking and messing around to get the 64bit systems to run properly.

Fair disclosure: the entire environment was developed on Red Hat Enterprise Linux 4, so
there's some inherent bias there.  Also, I haven't had the time to figure out how to get
things working properly on Ubuntu 64bit; I've just got some emails from some people describing
what they needed to do, which was pretty involved and not clean at all.  But maybe if I had
time to concentrate on it I could figure out something better.

However, the main problem seems to be that RH implements multilib as described in the LSB and
elsewhere, and Debian/Ubuntu does their 32/64bit interop a very different, and incompatible,
way.  It would be nice if Debian/Ubuntu could get on board with the LSB definition in this
situation so that we could create a portable environment.

Correction

Posted Apr 14, 2008 19:56 UTC (Mon) by andikleen (guest, #39006) [Link] (2 responses)

I mentioned Slackware as one of the distributions that do not do proper multilib. I have
since been told that at least one of the 64-bit variants of Slackware (Slamd64) does proper
FHS-compliant multilib, which I wasn't aware of. I apologize for the misrepresentation.

Correction

Posted Apr 14, 2008 22:32 UTC (Mon) by cathectic (guest, #40543) [Link] (1 responses)

This is not correct either - there is no official Slackware x86-64 port;
the only official Slackware is the 32-bit x86 build, or the Slack/390
port.

Anything else is unofficial and not Slackware (although we do try very
hard to be Slack-like). So technically, Slackware doesn't do multilib (and
Pat Volkerding has stated in interviews in the past that any such official
port would likely not be multilib anyway, so you would be right on that
front); but then again, Slackware doesn't do x86-64 at the moment either.

(I should state for the record here that I am a Slamd64 contributor).

Correction

Posted Apr 16, 2008 22:11 UTC (Wed) by pr1268 (guest, #24648) [Link]

Well, I think andikleen is partially correct: while not supported by Slackware's own development team (i.e. Patrick Volkerding et al.), the Slamd64 distribution is otherwise a 64-bit clone of Slackware.

Debian *is* working on multiarch

Posted Apr 28, 2008 14:24 UTC (Mon) by jasonspiro (guest, #38047) [Link]

Debian is working on multiarch support. Hopefully it will be fully implemented by the time Debian etch+1 comes out.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds