LWN: Comments on "The push to save Itanium" https://lwn.net/Articles/950466/ This is a special feed containing comments posted to the individual LWN article titled "The push to save Itanium". en-us Sat, 20 Sep 2025 14:11:40 +0000 Sat, 20 Sep 2025 14:11:40 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/953477/ https://lwn.net/Articles/953477/ farnz <p>There's a lot about AMD64 that is done purely to reuse existing x86 decoders, rather than because they're trying to have backwards compatibility with x86 at the assembly level. They don't have backwards compatibility at the machine code level between AMD64 and x86, and they could have re-encoded AMD64 in a new format, while having the same instructions as they chose to implement. <p>That's what I mean by "not having the money"; if they wanted assembly-level backwards compatibility, but weren't short on cash to implement the new design, they could have changed instruction encodings so that (e.g.) we didn't retain special encodings for "move to/from AL" (which exist for ease of porting to the 8086 from the 8085). Instead AMD reused the existing x86 encoding, with some small tweaks. Fri, 01 Dec 2023 13:17:06 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/953474/ https://lwn.net/Articles/953474/ sammythesnake <div class="FormattedComment"> <span class="QuotedText">&gt; Panic-implement x86, but 64-bit. This is basically what AMD did for AMD64, because they needed a 64-bit CPU, but didn't have the money to do a "clean-sheet" redesign</span><br> <p> Although I'm sure the cost of a from-scratch design would have been prohibitive in itself for AMD, I think the decision probably had at least as much to do with a very pragmatic desire for backward compatibility with the already near-universal x86 ISA.<br> <p> History seems to suggest that (through wisdom or luck) that was the right call, even with technical debt going back to the 4004 ISA which is now over half a century old(!) (<a rel="nofollow" href="https://en.m.wikipedia.org/wiki/Intel_4004">https://en.m.wikipedia.org/wiki/Intel_4004</a>)<br> </div> Fri, 01 Dec 2023 12:20:43 +0000 The push to save Itanium https://lwn.net/Articles/953305/ https://lwn.net/Articles/953305/ andrey.turkin <div class="FormattedComment"> I'm sure they do this because of the deep-rooted nostalgia, as those were their own PCs of their childhoods. Just like Z80/8080 people, 6502 people, or DOS people. There are also people (like Usagi Electric at youtube) who puddle in murky waters of 60s and 70s computers because of (I assume) a fascination with digital archaeology if not outright Frankenstein-like delight ("Its alive" wow factor).<br> Itanium doesn't check any of the boxes. It is not old enough and it is not home-friendly enough to have gathered a significant fan base. It was something you'd encounter at your workplace, which is not exactly nostalgia-inducing. Clinging to it seems to be as silly as clinging to DEC Alpha box or Sun Ultra 10 station - sure, it was great once... 20 years ago. Now it is long obsolete. It is time to let it die peacefully.<br> </div> Thu, 30 Nov 2023 04:30:49 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/952141/ https://lwn.net/Articles/952141/ JohnDallman <div class="FormattedComment"> I spent 1999-2003 porting a mathematical modeller to Itanium on Windows and HP-UX. 
I was one of the first to ship commercial software on Itanium, but it never made any money. The only positives from the work were educational. I had a lot of contact with Intel, including an excellent engineering contact, and got some insight into their thinking. <br> <p> They never had a real plan for how to make compilers discover the parallelization opportunities that they wanted to exist in single-threaded code. The intention was to have a horde of developers discover lots of heuristics that would add up to that. Essentially, a fantasy, but it meant the compiler team got to add more people and its managers got career progression. This had started early in the project, when the compiler people claimed "we can handle that" for many of the difficulties in hardware design.<br> </div> Tue, 21 Nov 2023 20:25:36 +0000 The push to save Itanium https://lwn.net/Articles/952143/ https://lwn.net/Articles/952143/ JohnDallman <div class="FormattedComment"> NetBSD have never actually made an Itanium release. FreeBSD and OpenBSD don't seem to touch it. <br> </div> Tue, 21 Nov 2023 20:25:13 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951828/ https://lwn.net/Articles/951828/ lproven <div class="FormattedComment"> Oh well done! <br> <p> I used to write for the Inq myself back then, too. Never got to meet Charlie, though.<br> </div> Fri, 17 Nov 2023 17:56:50 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951775/ https://lwn.net/Articles/951775/ james <blockquote> This was reported in various forum comments at the time and I can't give any citations, I'm afraid. </blockquote> <a href="http://web.archive.org/web/20031008045240/http://www.theinquirer.net/?article=11781">I can</a>. <p> Obviously, neither Microsoft nor Intel have publicly confirmed this, so a quote from Charlie is as good as you're going to get. <p> (And I can quite see Microsoft's point: the last thing they wanted was FUD between three different 64-bit instruction sets, with no guarantee as to which was going to win, one of them requiring users to purchase new versions of commercial software to get any performance, and the prospect that you'd then have to buy new versions <em>again</em> if you guessed wrong.<p> It would have been enough to drive anyone to Open Source.) Fri, 17 Nov 2023 16:24:09 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951823/ https://lwn.net/Articles/951823/ paulj <div class="FormattedComment"> If hardware can apply an optimisation, a software JIT should be able to as well. The question must then be:<br> <p> - Can the transistors (and pipeline depth) saved in hardware then be used to gain performance?<br> <p> Transmeta did well on power, but was not fast. That suggests the primary benefit is an increase in performance/watt from the transistor savings - not outright performance. Either that, or Transmeta simply didn't have the resources (design time, expertise) to use the extra transistor budget / simpler pipeline to improve outright performance. <br> </div> Fri, 17 Nov 2023 15:44:17 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951818/ https://lwn.net/Articles/951818/ foom <div class="FormattedComment"> When I first read about Itanium, I thought that surely Intel must be targeting it at JIT-compiled languages, and just didn't care that much about ahead-of-time compiled languages.
(Java is the future, right?)<br> <p> Because, in the context of a JIT, many of the impossible static scheduling problems of the in-order EPIC architecture seem to go away.<br> <p> You can have the CPU _measure_ the latencies, stalls, dependencies, etc., and then have software reorder the function so that subsequent executions are better arranged to take advantage of that observed behavior. You should be able to make more complex and flexible decisions in your software JIT than e.g. a hardware branch predictor can, while still being able to react to changing dynamic execution conditions, unlike AOT-compiled code.<br> <p> But, I guess Transmeta thought so too; their architecture was to have a software JIT compiler translating from x86 to a private internal EPIC ISA. And that didn't really work either...<br> </div> Fri, 17 Nov 2023 14:57:30 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951772/ https://lwn.net/Articles/951772/ lproven <div class="FormattedComment"> <span class="QuotedText">&gt; One could even argue that without Itanium Intel would have introduced something x86-64-like sooner.</span><br> <p> That's a good point and it's almost certainly true.<br> <p> <span class="QuotedText">&gt; Of course a butterfly scenario is what if in this case Intel would have refused to license x86-64 to AMD?</span><br> <p> There is an interesting flipside to this. <br> <p> There *were* two competing x86-64 implementations: when Intel saw how successful AMD's was becoming, it invented its own, _post-haste,_ and presented it secretly to various industry partners. <br> <p> Microsoft told it no, reportedly with a comment to the effect of "we are already supporting *one* dead-end 64-bit architecture of yours, and we are absolutely *not* going to support two of them. Yours offers no additional improvements, AMD64 is clearly winning, and so you must be compatible with the new standard." <br> <p> (This was reported in various forum comments at the time and I can't give any citations, I'm afraid.)<br> <p> For clarity, the one dead-end arch I refer to is of course Itanium.<br> <p> Intel was extremely unhappy about this, indeed furious, but it had contractual obligations with HP and others concerning Itanium, so it could not refuse. Allegedly it approached a delighted AMD and licensed its implementation, and issued a very quiet public announcement about it with some bafflegab about existing mutual agreements -- as AMD was already an x86 licensee, had been making x86 chips for some 20 years already, and had extended this as recently as the '386 ISA. Which Intel was *also* very unhappy about, but some US governmental and military deals insisted that there be second sources for x86-32 chips, so it had to.<br> <br> My personal and entirely unsubstantiated notion is that UEFI was Intel's revenge on the x86-64 market for being forced to climb down on this. We'd all have been much better off with OpenFirmware (as used in the OLPC XO-1) or even LinuxBIOS.<br> </div> Fri, 17 Nov 2023 11:10:35 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951729/ https://lwn.net/Articles/951729/ farnz <p>I disagree, in large part because the benefits of EPIC were being touted by comparing hand-crafted EPIC examples against compiler output; in other words, Intel could reasonably (by 1995) have had an EPIC compiler and been showing that it was a long way short of the quality needed to meet the hand-crafted examples.
I'd also note that the techniques that would be needed to make EPIC compilers meet the hand-crafted examples go well beyond today's state of the art. <p>And underlying this is the degree to which EPIC was focused on impractical "if only software would do better" situations, not on the real world. Thu, 16 Nov 2023 18:40:08 +0000 The push to save Itanium https://lwn.net/Articles/951739/ https://lwn.net/Articles/951739/ professor <div class="FormattedComment"> OpenVMS on HP blade servers with Itanium is the last thing I did in this area.<br> <p> Now they are porting OpenVMS to x86.<br> </div> Thu, 16 Nov 2023 18:05:41 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951719/ https://lwn.net/Articles/951719/ anton <ol> <li> I don't think that compiler technology that benefitted IA-64 would have benefitted OoO IA-32 implementations much, for the following reasons: Techniques like basic-block instruction scheduling, superblock scheduling, and modulo scheduling (software pipelining) were already known at the time of the Pentium, and the in-order Pentium would have benefitted more from them than the Pentium Pro. However, these techniques tend to increase register pressure (IA-64 has 128 integer registers for a reason), and IA-32 has only 8 registers, so applying such techniques could easily have resulted in a slowdown. <p>IA-64 also has architectural features for speculation and for dealing with aliases that IA-32 does not have and so an IA-32 compiler cannot use. But given the lack of registers, that's moot. <li>Every CPU profits from more cache for applications that need it (e.g., see the Ryzen 5800X3D), and given main memory latencies of 100 cycles and more, the benefit of OoO execution for hiding them is limited (at that time, but also now). Itanium (II) got large caches because a) that's what HP's customers were used to and b) even outside HP the initial target market was high-end stuff. Also, given all the promises about superior performance, cache is an easy way to get it on applications that benefit from cache (and if you skimp on it, it's an easy way to make your architecture look bad in certain benchmarks). So Itanium II (not sure about Itanium) could score at least a few benchmark wins, and its advocates could say: "See, that's what we promised. And when compilers improve, we will also win the rest." </ol> Thu, 16 Nov 2023 16:56:30 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951703/ https://lwn.net/Articles/951703/ farnz <p>A couple of things: <ol> <li>Intel's failure of imagination with compiler technology was a failure to observe that compiler scheduling goes hand-in-glove with hardware scheduling; the original comparison between what a 1994 compiler could generate for the Pentium Pro versus hand-optimized code for a hypothetical EPIC machine should have been redone as the compiler for the EPIC machine improved. Had they done this, they'd have noticed that the compiler improvements needed for EPIC also benefited the Pentium Pro/II/III, and they would have been a lot less bullish on IA-64. <li>The implementations of IA-64 needed a lot of cache, but not low-latency cache, in order to perform adequately; the ISA made it possible to not care much about the latency of instruction fetch (at least in theory), but did require decent throughput.
So, where IA-32 had 1 MiB of L3 cache, Merced had 4 MiB of L3 in the same timescale, and this increased requirement for total cache stayed throughout IA-64's lifetime. </ol> <p>I stuck to what Intel should have known in 1995 for two reasons: first, because this was before the hype train for IA-64 got properly into motion (and as you've noted, once the hype train got moving, Intel couldn't easily back out of IA-64). Second, by my read of things like <a href="https://www.sigmicro.org/media/oralhistories/colwell.pdf">this oral history about Bob Colwell's time at Intel</a>, Intel insiders with reason to be heard (having worked on Multiflow, for a start) had already started sounding the alarm about Itanium promises by 1995, and so I don't think it impossible that an alternate history would have had Intel reassessing the chances of success at this point in time. Thu, 16 Nov 2023 15:56:55 +0000 EPIC failure https://lwn.net/Articles/951702/ https://lwn.net/Articles/951702/ paulj <div class="FormattedComment"> Ah, I was referring to the software fixes for the affected hardware. <br> </div> Thu, 16 Nov 2023 14:04:25 +0000 EPIC failure https://lwn.net/Articles/951657/ https://lwn.net/Articles/951657/ anton Another approach (taken by the "invisible speculation" work) is to never let any misspeculated work influence anything that is measurable in the same thread or others. In that case you do not need to identify any boundaries. Wherever they are, the secrets are not revealed through side effects of speculation. Thu, 16 Nov 2023 13:10:40 +0000 EPIC failure https://lwn.net/Articles/951642/ https://lwn.net/Articles/951642/ anton What performance cost? Skylake, Kaby Lake, and Coffee Lake are affected, while Whiskey Lake, Comet Lake, and Amber Lake are not. They are all based on the Skylake microarchitecture and have the same IPC, and the latter have a <a href="https://en.wikipedia.org/wiki/Comet_Lake">higher turbo clock rate</a>. Apparently Intel's engineers managed to add the necessary multiplexer for suppressing the invalid speculative result (or however they did it) without increasing the cycle time. Thu, 16 Nov 2023 13:06:13 +0000 EPIC failure https://lwn.net/Articles/951624/ https://lwn.net/Articles/951624/ farnz <p>Ultimately, all the variants of Spectre boil down to "there is a security boundary here, and the CPU ignores that security boundary while executing speculatively". Sometimes that security boundary is something the CPU has no excuse for not knowing about (kernel mode versus user mode, for example); other times, it's something like the boundary between a JavaScript interpreter and its JITted code, where the boundary exists only as a software construct. <p>And the thing EPIC demonstrated very clearly (it has both compiler-based speculation and compiler-based parallelism) is that any compiler rearrangement of code to support explicit speculation and parallelism also benefits implicit speculation and parallelism done by an OoOE CPU. Thus, what we really want for both high performance and high security is for the processor to be told where there's a security boundary, so that it can restrict its speculation where there's a possibility of a timing-based leak across the boundary. <p>Further, EPIC is not invulnerable to speculation-based attacks; nothing prevents a compiler from issuing a speculative or advanced load, then calling a function, then checking the result of the load later, other than the likelihood that speculation will not have succeeded due to the function call.
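<p>To make that concrete, here is a minimal sketch in C of the classic Spectre-v1 gadget (the array names and the 512-byte stride follow Kocher et al.'s paper and are illustrative only). Whether the out-of-bounds load runs speculatively on an OoO core or is hoisted above the bounds check as an IA-64-style speculative load (ld.s/chk.s), the dependent load touches a cache line selected by the secret byte, and cache timing can recover it:
<pre>
/* Illustrative sketch of the Spectre-v1 gadget (after Kocher et al.);
   names are from that paper, not from anything in this thread. */
#include &lt;stddef.h&gt;
#include &lt;stdint.h&gt;

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */

uint8_t victim(size_t x) {
    if (x &lt; array1_size) {
        /* Speculation (hardware or compiler-scheduled) may perform both
           loads before the bounds check resolves; the array2 access
           leaves a secret-dependent line in the cache. */
        return array2[array1[x] * 512];
    }
    return 0;
}
</pre>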
Thu, 16 Nov 2023 12:15:26 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951614/ https://lwn.net/Articles/951614/ anton The promise of EPIC was also that the hardware would be simpler and faster and would allow wider issue because: <ul> <li> The hardware would not have to check for dependences between registers in the same instruction group, while an ordinary n-wide superscalar RISC (even if in-order) has to check whether any of the next n instructions accesses a register that an earlier of those instructions writes. The argument was that this requires quadratic effort and does not scale (a sketch of that check appears at the end of this comment). <li> The hardware would not have to deal with scheduling and could therefore be built to run at higher clock speeds. In reality IA-64 implementations were always at a clock speed disadvantage compared to out-of-order AMD64 implementations. And looking at other instances of in-order vs. OoO, OoO usually was competitive or even had the upper hand in clock speed. We saw this from the start with the Pentium at 133 MHz and the Pentium Pro at 200 MHz in 1995. <li> Compilers have a better understanding of which instructions are on the critical path and therefore need to be reordered, whereas hardware scheduling just executes ready instructions. In practice compiler knowledge is limited by compilation unit boundaries and stuff like indirect calls, cache misses, and, most importantly, by the much lower accuracy of static branch prediction compared to dynamic (hardware) branch prediction. Admittedly, in 1994 the hardware branch predictors had not advanced as far, so one might still have thought at the time that the other expected advantages would compensate for that. <p>Some people think that the Achilles heel of EPIC is cache misses, but the fact that IA-64 implementations preferred smaller (i.e., less predictable) L1 caches with lower latency over bigger, more predictable L1 caches shows that the compilers have more problems dealing with the latency than with the unpredictability. </ul> These promises were so seductive that not just Intel and HP, but also Transmeta's investors burned a lot of money on them, and even in recent years I have seen people advocating EPIC-like ideas. <p>One aspect that is often overlooked in these discussions is the advent of SIMD instructions in general-purpose computers in the mid-1990s. The one thing where IA-64 shone was dealing with large arrays of regular data, but SIMD also works well for that, at lower hardware cost. So mainstream computing squeezed EPIC from both sides: SIMD ate its lunch on throughput computing, while OoO outperformed it in latency computing. <p>As for coping with reality, the Pentium 4 was released in November 2000 with a 128-instruction reorder window and SSE2. This project and its parameters had been known inside Intel several years in advance, while Merced (the first Itanium) was released in May 2001 (at 800 MHz, while the Pentium 4 was available at 1700 MHz by that time). <p>But of course, by that time, Intel had made promises about IA-64 for several years, and a lot of other companies had invested in Intel's roadmaps, so I guess that Intel could not just say "Sorry, we were wrong", but had to continue on the death march. The relatively low level of hardware development activity after McKinley (2002) indicates that Intel had mostly given up by that time (but then, how do we explain Poulson?).
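<p>Here is that sketch: a rough illustration in C (the three-operand Insn type is invented for the example; memory dependences are omitted for brevity) of the issue-group check an n-wide in-order superscalar performs each cycle, and which IA-64's compiler-marked instruction groups were meant to make unnecessary:
<pre>
/* Hedged sketch of the cross-check an n-wide in-order superscalar does
   per potential issue group: in hardware this is n*(n-1)/2 comparator
   pairs per register field -- the quadratic cost described above. */
#include &lt;stdbool.h&gt;

typedef struct { int src1, src2, dst; } Insn;  /* invented encoding */

bool can_issue_together(const Insn *group, int n) {
    for (int i = 0; i &lt; n; i++) {
        for (int j = i + 1; j &lt; n; j++) {
            /* read-after-write: a later instruction reads a register
               that an earlier instruction in the group writes */
            bool raw = group[j].src1 == group[i].dst ||
                       group[j].src2 == group[i].dst;
            /* write-after-write: two writes to the same register */
            bool waw = group[j].dst == group[i].dst;
            if (raw || waw)
                return false;  /* hardware must split issue here; EPIC
                                  instead has the compiler mark a stop */
        }
    }
    return true;
}
</pre>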
Thu, 16 Nov 2023 11:13:54 +0000 EPIC failure https://lwn.net/Articles/951623/ https://lwn.net/Articles/951623/ paulj <div class="FormattedComment"> Quickly fixed, but at a substantial performance cost. <br> </div> Thu, 16 Nov 2023 10:55:55 +0000 EPIC failure https://lwn.net/Articles/951612/ https://lwn.net/Articles/951612/ anton You get Spectre with compiler-based speculation just as you do with hardware-based speculation. And there are ways (called "invisible speculation" in the literature) to fix Spectre for hardware speculation that do not cost much performance; for compiler-based speculation I think this would need architectural extensions for identifying which instructions are part of which speculation. <p>Meltdown would indeed not have happened with compiler-based speculation: the hardware does not know that the instruction is speculative, so it would observe permissions unconditionally. But, e.g., AMD's microarchitectures never had Meltdown, and it was relatively quickly fixed in Intel's microarchitectures, so I don't see that as an advantage of compiler-based speculation. Thu, 16 Nov 2023 09:30:30 +0000 The push to save Itanium https://lwn.net/Articles/951608/ https://lwn.net/Articles/951608/ anton We have all kinds of outdated hardware (including an IA-64 box) in order to do comparisons between hardware across the ages. In my case I compare them using software written in C that does not use modern glibc features, so I don't need a modern userland and thus no modern kernel (so our IA-64 actually runs some ancient system), but others with a similar motivation may need an up-to-date userland. <p>I don't know if that is the motivation of those who want to keep IA-64 in the kernel and glibc, though. Thu, 16 Nov 2023 08:58:18 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951501/ https://lwn.net/Articles/951501/ james It largely goes back to the early IBM PC days, when both IBM and AMD acquired second-source licenses, including patent cross-licenses, so they could make chips up to (and including) the 286 using Intel's designs.<p> They weren't the only ones. <p> When Intel and HP got together to create Merced (the original Itanium), they put the intellectual property into a company they both owned, but which didn't have any cross-license agreements in place, which is why AMD wouldn't have been able to make Itanium-compatible processors except on Intel's (and HP's) terms. Wed, 15 Nov 2023 14:22:06 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951481/ https://lwn.net/Articles/951481/ Wol <div class="FormattedComment"> Which is why I mentioned the 686. I don't remember the details, but there was some sort of deal (with IBM?) involving Cyrix which meant the 686 was legally licenced, and I thought AMD had inherited that. Either way, I'm sure AMD had some sort of grandfather licence deal.<br> <p> Cheers,<br> Wol<br> </div> Wed, 15 Nov 2023 13:27:21 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951279/ https://lwn.net/Articles/951279/ anselm <blockquote><em>IIRC Intel was making various threats towards AMD wrt licensing various aspects of the x86 ISA. AMD was definitely in a kind of legal underdog situation. Inventing x86-64 put AMD in a much stronger position and forced Intel into a cross licensing arrangement, guaranteeing a long lasting patent peace.
Which was good for customers.</em></blockquote> <p> There would have had to be some sort of arrangement in any case, because large customers (think, e.g., the US government) tend to insist on having two different suppliers for important stuff. </p> Tue, 14 Nov 2023 10:12:51 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951209/ https://lwn.net/Articles/951209/ farnz <p>They'd have gone in one of two directions: <ol> <li>Panic-implement x86, but 64-bit. This is basically what AMD did for AMD64, because they needed a 64-bit CPU, but didn't have the money to do a "clean-sheet" redesign; Intel could have done similar. <li>A "new" ISA, based on IA-64 but built around OoOE instead of explicit compiler scheduling. </ol> <p>It'd be interesting to see what could have been if 1995 Intel had redesigned IA-64 around OoOE instead of EPIC; they'd still want compiler assistance in this case, because the goal of the ISA changes from "the compiler schedules everything, and we have a software-visible ALAT and speculation" to "the compiler stops us from being trapped when we're out of non-speculative work to do". Mon, 13 Nov 2023 12:41:44 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951199/ https://lwn.net/Articles/951199/ marcH <div class="FormattedComment"> <span class="QuotedText">&gt; That Itanium delivered the coupe de grace...</span><br> <p> Coup: blow, strike, punch, kick, etc. Silent "p".<br> Coupe: cut (haircut, card deck, cross-section, clothes,...). Not silent "p", due to the following vowel.<br> <p> So, some "coups de grâce" may indeed have involved some sort of... cut. Just letting you know about that involuntary, R-rated image of yours :-)<br> <p> According to Wiktionary, the two words differ by only one letter but have totally different origins.<br> <p> For completeness:<br> Grâce: mercy (killing) or thanks (before a meal, or in "thanks to you")<br> Dropping the ^ accent on the â doesn't... hurt much.<br> <p> </div> Mon, 13 Nov 2023 01:01:59 +0000 The push to save Itanium https://lwn.net/Articles/951182/ https://lwn.net/Articles/951182/ pm215 <div class="FormattedComment"> The trouble with wanting a modern userspace is that that userspace tends to be written to assume a certain level of performance from its CPU and a certain amount of RAM it can use; as time goes on, userspace gets gradually more CPU-hungry and more RAM-hungry.
Linux is better than most for this, but not immune, and at some point, even if you can technically run a modern distro on a bit of retrocomputing hardware, you won't be happy with the performance compared to continuing to use the five-year-old, as-originally-shipped version.<br> <p> </div> Sun, 12 Nov 2023 09:35:10 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951180/ https://lwn.net/Articles/951180/ ianmcc <div class="FormattedComment"> If Intel had introduced x86-64 rather than AMD, would Intel have screwed it up?<br> </div> Sun, 12 Nov 2023 09:02:04 +0000 EPIC failure https://lwn.net/Articles/951178/ https://lwn.net/Articles/951178/ epa <div class="FormattedComment"> With what we have learned from Spectre and Meltdown, weren't there at least some advantages in having superscalar and speculative execution under the control of the compiler?<br> </div> Sun, 12 Nov 2023 07:37:47 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951176/ https://lwn.net/Articles/951176/ joib <div class="FormattedComment"> <span class="QuotedText">&gt; A butterfly scenario? Don't you mean an alternate reality?</span><br> <p> It was a reference to the "butterfly effect", <br> <p> <a href="https://en.m.wikipedia.org/wiki/Butterfly_effect">https://en.m.wikipedia.org/wiki/Butterfly_effect</a><br> <p> , meaning that seemingly minor details can result in major unforeseen consequences.<br> <p> (which is one reason why "alternate history" is seldom a usable tool for serious historical research)<br> <p> <span class="QuotedText">&gt; In THIS reality, what would have happened if AMD had refused to licence x86-64 to Intel?</span><br> <p> IIRC Intel was making various threats towards AMD wrt licensing various aspects of the x86 ISA. AMD was definitely in a kind of legal underdog situation. Inventing x86-64 put AMD in a much stronger position and forced Intel into a cross licensing arrangement, guaranteeing a long lasting patent peace. Which was good for customers. <br> </div> Sun, 12 Nov 2023 07:30:15 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951171/ https://lwn.net/Articles/951171/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; Of course a butterfly scenario is what if in this case Intel would have refused to license x86-64 to AMD?</span><br> <p> A butterfly scenario? Don't you mean an alternate reality?<br> <p> In THIS reality, what would have happened if AMD had refused to licence x86-64 to Intel?<br> <p> In reality, I think that that couldn't happen - I don't know the details of the intricate licencing deals (which I believe go back to the Cyrix 686 - yes, that long ago), but I think there are licence-sharing deals in place that meant Intel could use x86-64 without having to negotiate.<br> <p> Cheers,<br> Wol<br> </div> Sat, 11 Nov 2023 22:48:09 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951163/ https://lwn.net/Articles/951163/ joib <div class="FormattedComment"> All of them, Itanium included, faded away into near irrelevance. So whether Intel created Itanium or not, the industry would ultimately have consolidated around x86-64/Windows/Linux.<br> <p> It's unclear whether Intel profited more from Itanium than it would have from the alternative scenario, in which it introduced x86-64 earlier.
<br> </div> Sat, 11 Nov 2023 17:01:51 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951150/ https://lwn.net/Articles/951150/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; Arguably industry consolidation was inevitable anyway</span><br> <p> You're still looking at this from a big-picture/industry-wide perspective.<br> <p> The fact that Itanium was a technical failure doesn't mean it wasn't a massive strategic success for Intel. By getting the industry to consolidate around an *Intel* solution, they captured the mindshare, revenue, and economies of scale that would otherwise have gone elsewhere. <br> <p> </div> Sat, 11 Nov 2023 15:49:42 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951148/ https://lwn.net/Articles/951148/ joib <div class="FormattedComment"> Arguably industry consolidation was inevitable anyway due to exponentially increasing chip and process R&amp;D costs, and the clock was ticking for the Unix vendors with their high-margin, low-volume businesses. That Itanium delivered the coupe de grace to several of them was inconsequential, the ultimate winners being x86(-64), Windows and Linux. <br> <p> One could even argue that without Itanium Intel would have introduced something x86-64-like sooner. Of course a butterfly scenario is what if in this case Intel would have refused to license x86-64 to AMD?<br> </div> Sat, 11 Nov 2023 15:21:40 +0000 The push to save Itanium https://lwn.net/Articles/951147/ https://lwn.net/Articles/951147/ joib <div class="FormattedComment"> Kittson was a new generation in name only, likely introduced only to fulfill some contractual obligations. Essentially it was a previous-generation CPU, codenamed Poulson (2012), with a very modest clock speed bump. <br> </div> Sat, 11 Nov 2023 15:08:43 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951146/ https://lwn.net/Articles/951146/ pizza <div class="FormattedComment"> I'd argue that Itanium was actually an overwhelming success. <br> <p> It got multiple RISC server vendors to scrap their in-house designs and hitch themselves to Intel's offerings.<br> </div> Sat, 11 Nov 2023 14:27:39 +0000 The push to save Itanium https://lwn.net/Articles/951142/ https://lwn.net/Articles/951142/ ianmcc <p>From <a href="https://en.wikipedia.org/wiki/Itanium">https://en.wikipedia.org/wiki/Itanium</a>: </p> <blockquote> In February 2017, Intel released the final generation, Kittson, to test customers, and in May began shipping in volume.[7][8] It was used exclusively in mission-critical servers from HPE. In 2019, Intel announced that new orders for Itanium would be accepted until January 30, 2020, and shipments would cease by July 29, 2021.[1] This took place on schedule.[9] </blockquote> So they were still shipping in 2021, although new orders were not supposed to be happening. I'm surprised that Intel and HP are not prepared to keep up support, considering it is only 2 years since they last shipped! This was a surprise to me; I had assumed they stopped development on Itanium around 2005!
Sat, 11 Nov 2023 11:22:20 +0000 The push to save Itanium https://lwn.net/Articles/951127/ https://lwn.net/Articles/951127/ WolfWings <div class="FormattedComment"> ...and?<br> <p> At this point the upcoming Raspberry Pi 5 will outperform all but the final, last-gasp 8-core/16-thread Itanium CPUs, judging from what benchmarks I've been able to dig up.<br> <p> It's an INCREDIBLY dead platform because it's just so atrocious, from the fundamental design all the way up to the software (non-)support. It's as dead as the Bulldozer variants from AMD were compared to previous and later models; it just has no benefits at all versus many other options.<br> </div> Sat, 11 Nov 2023 01:12:10 +0000 EPIC failure (to cancel the project when it was first failing) https://lwn.net/Articles/951120/ https://lwn.net/Articles/951120/ CChittleborough <div class="FormattedComment"> This is an informative and insightful comment. Thank you.<br> </div> Fri, 10 Nov 2023 23:26:07 +0000