Itanium lost for the same reason Pentium Pro lost
Posted Oct 19, 2008 9:08 UTC (Sun) by khim
In reply to: Easy? I think not.
Parent article: Linux now an equal Flash player (Linux-Watch)
"Itanic" indeed, that's exactly my point about "no clue". You correctly identify that there's a transition process, and the Itanium made no allowance for such a process, it seemed as though Intel hadn't even considered that the software then running on their x86 line would be expected to somehow transition onto this weird new architecture.
Itanium had support for such a process, but its 32-bit performance was abysmal. Intel's engineers assumed that this time the transition would not have to take decades and concentrated solely on the 64-bit side - and of course they were wrong.
The degree of abstraction from hardware is much greater this time, meaning you don't have all or nothing transitions.
Yahoo! Yes, the abstraction is much greater, the "transition pressure" is lower - and that means the transition will be slower, not faster.
With RAM being so cheap, most general purpose computing devices will quickly be 64-bit capable for practical reasons, and once the CPU is 64-bit you will have some applications that make use of that.
Yes, some applications. The kind of applications which need huge amounts of memory. Like Photoshop - which has a 64-bit version now. For a whopping three days (CS4 was released October 15, 2008) - and only under Windows Vista (the Windows XP and MacOS X versions are 32-bit only).
You no longer have "32-bit only" as an option, so if you want the minimum number of platforms to support, "64-bit only" becomes the sensible choice.
Why not? As I recall Photoshop is a very resource-hungry application and MacOS is the "platform of choice" for designers, yet Photoshop remains a 32-bit application there. It'll benefit from huge disk caches on 8GB-16GB systems, but it does not need to be 64-bit to take advantage of this memory... The pressure to make applications (not just the OS) 64-bit will only begin to mount 5-10 years down the road.
I think this creates a situation in which a "siphon effect" occurs, taking people from "Well, there is one 64-bit program I want to run" to a 64-bit only system in one upgrade cycle.
Sorry, but I can not see how it creates such an effect. We've switched from "32bit system with some 64bit programs" (this is what we used for the last three years) to "64bit system with a lot of 32bit programs" at my work just a few months ago (and there were voices saying to wait a year or two) - and a lot of stuff is still 32bit-only with no plans to switch to 64bit any time soon. Sparc servers worked with such a model (64bit kernel, 32bit userspace) for years; why must x86-64 be different?
I would not have said that transition to 32-bit ended in 2001. I'd say that happened in the mid 1990s, there was a lot of mess inside Windows 95, but new development of 16-bit software had ceased, PC video games (which you might think of as 16-bit because they ran under DOS for a few years still after Win 95) were actually using DOS extenders to escape 16-bit DOS and run 32-bit code.
Windows95/98/ME had a lot of 16-bit code, and programs for that OS often included 16bit helpers (not just installers). And games actually switched to pure 32bit only when 3D became widespread - that's a few years after Quake. As long as Windows 9X was alive it was too early to say "the 16bit era is history". Even if you count the mid 1990s as the end of the "16bit era", it's still 10 years after the introduction of the i386 architecture...
When the Pentium Pro flopped it was just barely mis-timed, its 16-bit performance was poor and the last few major applications with performance critical 16-bit code were just dying off. The Pentium II, also with fairly bad 16-bit performance, went into the market just fine.
Actually they fixed this performance loss, and that's what saved the Pentium II: a lot of programs still depended on 16-bit code - most of all Windows 9X itself.
Of course very little of this strictly has to be in RAM at one moment.
This is not a problem: when you are talking about such huge data sizes it often does not matter whether it's all addressable at once or not. You can always use mmap to map a window over a huge data region - like EMS back in the day. EMS worked just fine when random access was not needed (random access to a video file is really random access to a few thousand positions in that file).
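To be concrete, here is roughly the kind of windowed access I have in mind. This is only a sketch: the file name, its assumed multi-gigabyte size and the 64MiB window are invented for illustration.

    /* EMS-style "window" into a huge file from a 32-bit process: instead of
     * addressing the whole file at once, map only the region currently
     * needed and remap when moving on.  Build a 32-bit binary with
     * -D_FILE_OFFSET_BITS=64 so off_t is 64 bits wide. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define WINDOW_SIZE (64 * 1024 * 1024)   /* 64MiB easily fits in a 32bit address space */

    int main(void)
    {
        int fd = open("/tmp/bigdata.bin", O_RDONLY);   /* hypothetical huge file */
        if (fd < 0) { perror("open"); return 1; }

        /* offset must be page-aligned; here: 40GiB into the file */
        off_t offset = (off_t)40 * 1024 * 1024 * 1024;

        void *window = mmap(NULL, WINDOW_SIZE, PROT_READ, MAP_PRIVATE, fd, offset);
        if (window == MAP_FAILED) { perror("mmap"); return 1; }

        /* work on this slice of the data... */
        printf("first byte of window: %u\n", ((unsigned char *)window)[0]);

        /* ...then drop it and map the next region of interest */
        munmap(window, WINDOW_SIZE);
        close(fd);
        return 0;
    }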
You're right - in principle there are programs running on this laptop in front of me that could be 16-bit, or even 8-bit programs. They don't do very much, they don't manipulate huge files, or anything.
They are using GLibC - that's enough. A statically linked "Hello world" is over 128KiB today - more than 16-bit can handle without a lot of tricks. And when we are talking about code, that's a different kettle of fish. EMS worked fine with a lot of data, but when a program included 1-2MB of code... it was a different game altogether. It was still possible, but the speed loss was 10-15%, sometimes more.
But in practice it's much easier to just have one platform, a platform that's big enough for all the programs you run, and I argue that today and certainly tomorrow that platform is 64-bit.
It's certainly easier if you start from scratch. But if you have a 32bit legacy it's a different story.
Finally the pointers question. This is a design issue. There's a common Unix design where pointers are used as opaque handles, so an "image handle" or a "message handle" or whatever is just a pointer. Fast, but not necessarily space efficient. If your program typically has a few individual handles being passed around, that's a good pragmatic decision, but if you keep huge structures filled with handles then you may need to re-evaluate when porting to 64-bit, could the handles be indexes on an array or vector instead?
Ah. That's the answer I've been waiting for: so you suggest switching from 32bit to 64bit, throwing away the EMS-style tricks used for video and adding new, equally troublesome, tricks to work with data structures. Essentially zero gain for a lot of work (you still can not use a lot of data objects because your 32bit indexes will overflow). Is this a wise decision? Not IMO.
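To show what this "handles as indexes" trick looks like (a sketch only, all names here are invented): the structures holding handles stay at 4 bytes per entry on a 64-bit build, but you inherit exactly the overflow limit I mention above.

    /* "handle = index into an array" instead of "handle = pointer".
     * Handles stay 4 bytes on both 32-bit and 64-bit builds, but the number
     * of live objects is capped by the 32-bit index. */
    #include <stdint.h>
    #include <stdlib.h>

    struct image {
        int width, height;
        unsigned char *pixels;
    };

    /* global table of images; a handle is just an index into it */
    static struct image *image_table;
    static uint32_t image_count, image_capacity;

    typedef uint32_t image_handle;           /* 4 bytes even on x86-64 */

    image_handle image_alloc(int w, int h)
    {
        if (image_count == image_capacity) {
            image_capacity = image_capacity ? image_capacity * 2 : 16;
            image_table = realloc(image_table,
                                  image_capacity * sizeof(*image_table));
        }
        image_table[image_count] = (struct image){ w, h, malloc((size_t)w * h) };
        return image_count++;                /* overflows after 2^32 objects */
    }

    struct image *image_get(image_handle h)
    {
        return &image_table[h];              /* one extra indirection vs. a raw pointer */
    }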
That's what I wanted to say: for a lot of programs the switch to 64bit is just a replacement of one set of tricks with another. Today. When they start to work with billions of these handles (and that means many gibibytes of RAM, not 4GiB or 8GiB) it'll make sense to use 64bit despite the speed loss (otherwise it just won't work), and the switch should probably happen somewhat earlier. But to do it today... it's too early.
Programs that absolutely /must have/ large numbers of real pointers or other address-sized things are fairly rare.
The only place where the 64-bit switch happened fast is FOSS, because a lot of people just took it as a challenge: let's make everything pure 64-bit. Where people are counting money the process is much, much slower - unless you are talking about the few guys who have needed and used gibibytes of RAM for many years (like EDA).