
Notes from the Intelpocalypse

Notes from the Intelpocalypse

Posted Jan 4, 2018 16:21 UTC (Thu) by ortalo (guest, #4654)
In reply to: Notes from the Intelpocalypse by JFlorian
Parent article: Notes from the Intelpocalypse

I am not sure this will disrupt much in the near term. So many users are so used to seeing the security of their computing devices compromised that they do not even complain anymore.
Besides, hardware vendors do not seem to me to be the biggest culprits here. Why was speculative execution introduced in the first place? Because software programmers did not want to use complex compilation techniques (involving, e.g., data profiling or variant programming) and the software industry did not want to pay for advanced development tools (and possibly new languages, new compilers, etc.). Hardware-level runtime optimization was seen as good enough. Well, it may be, but maybe you lose more in the process than you think (especially predictability).

"Good enough" has always been a very problematic strategy for security-critical computing. But why would such a state of fact be perturbed? (I hope it will be, but I do not see why unless all users finally start to see some kind of new light and put actual money on it - or more probably out of not good enough systems and teams.)



Notes from the Intelpocalypse

Posted Jan 4, 2018 16:30 UTC (Thu) by epa (subscriber, #39769) [Link]

Intel did have a good try at removing speculative execution from the processor: with Itanium, as I understand it, any speculative load instructions have to be put in by the compiler explicitly. As you say, it didn't take off because nobody could be bothered to switch to the advanced compiler technology needed. (Well, there may be other reasons why Itanic sank, and in some ways we are better off not having an Intel-proprietary instruction set, but certainly industry inertia was part of it.)

Notes from the Intelpocalypse

Posted Jan 4, 2018 17:23 UTC (Thu) by pizza (subscriber, #46) [Link] (5 responses)

> Because software programmers did not want to use complex compilation techniques (involving, e.g. data profiling or variants programming) and the software industry did not want to pay for advanced devleopment tools (and possibly new languages, new compilers, etc.).

That's disingenuous.

The simple fact of the matter is that those "complex compilation techniques" didn't exist at the time, and still don't exist today. And it's not for lack of trying. Intel spent many billions of dollars trying to make this work, and made some progress -- but it turned out that the hardware could do a better job at runtime than the compilers could at compile time, because, as it turns out, at runtime one has the advantage of knowing the *data* the software is dealing with, while at compile time one doesn't.

Notes from the Intelpocalypse

Posted Jan 4, 2018 22:57 UTC (Thu) by roc (subscriber, #30627) [Link] (4 responses)

This is completely right.

Also, Itanium would not have been immune to Spectre. Itanium included speculative load operations, and in the "Spectre variant 1" attack, the compiler might well have hoisted the problematic loads above the bounds check precisely to get the performance benefit that an out-of-order CPU gets by speculatively executing those loads.
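For reference, the variant 1 pattern under discussion looks roughly like the sketch below (a minimal illustration in C; the array1/array2 names follow the published Spectre description rather than any real code base):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical Spectre variant 1 gadget (illustrative names). */
    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];
    uint8_t temp;

    void victim_function(size_t x)
    {
        if (x < array1_size) {
            /* A CPU may execute both loads speculatively before the bounds
             * check resolves; on Itanium a compiler could hoist them
             * explicitly for the same performance reason.  The second load
             * leaves a cache footprint indexed by the secret byte, which an
             * attacker can later measure. */
            uint8_t secret = array1[x];
            temp &= array2[secret * 4096];
        }
    }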

Notes from the Intelpocalypse

Posted Jan 5, 2018 6:59 UTC (Fri) by epa (subscriber, #39769) [Link] (3 responses)

Right - but on Itanium it would be more straightforward to fix, since you could set a compiler flag to just remove speculative load instructions from the kernel (as a quick fix), adding them back where they are proven safe. Indeed, the compiler could be taught not to speculatively lift loads outside bounds checks.
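(As an aside, the fix that ended up being used on conventional out-of-order CPUs is in that spirit, only done by hand in the source: clamp the index branchlessly so that even a mispredicted check cannot yield an out-of-bounds load. A minimal C sketch, in the spirit of the array_index_nospec() helper the Linux kernel later grew; the names here are illustrative, and it assumes a 64-bit long with arithmetic right shift of negative values, as GCC and Clang provide:)

    #include <stddef.h>
    #include <stdint.h>

    /* Branch-free mask: ~0 when idx < size, 0 otherwise, so a mispredicted
     * bounds check still cannot produce an out-of-bounds address. */
    static inline size_t index_mask(size_t idx, size_t size)
    {
        return (size_t)(~(long)(idx | (size - idx - 1)) >> (sizeof(long) * 8 - 1));
    }

    uint8_t load_in_bounds(const uint8_t *arr, size_t size, size_t idx)
    {
        if (idx >= size)
            return 0;
        /* Architecturally idx is in range here; under misspeculation the
         * mask forces the index to 0 instead of letting it run wild. */
        return arr[idx & index_mask(idx, size)];
    }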

In user space, I imagine that the explicit speculative load instruction used on Itanium does do all the same memory access checking as an ordinary non-speculative load, so it can't be used to snoop in the same way as the hidden speculative execution on x86_64.

Notes from the Intelpocalypse

Posted Jan 5, 2018 10:12 UTC (Fri) by ortalo (guest, #4654) [Link]

Well, maybe I am somewhat disingenuous; admittedly, back then the hardware-based solutions looked better. But I have to question everything, including whether the most prominent hardware vendor of the time really did try to favor software development tools over its own silicon-oriented intellectual property, don't you think?
Anyway, I would love to be proven wrong, and to see some of this past research resurrected into a nice, powerful-enough deterministic processor and an associated innovative software development environment for current and near-future critical systems. In my opinion, now is the right time, and many would certainly consider helping (in good faith, I assure you ;-).

Notes from the Intelpocalypse

Posted Jan 5, 2018 11:16 UTC (Fri) by roc (subscriber, #30627) [Link] (1 responses)

Your second paragraph seems to be talking about Meltdown, but Spectre 1 is still a problem for user-space applications. It is probable that Meltdown wouldn't have worked on Itanium.

FWIW, in C I don't think it's easy to tell what is a bounds check, or which loads are guarded by which checks.
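To illustrate (a hypothetical example, not taken from any real code base): the check is often just a predicate in another function, so nothing in the source, or in the compiler's IR after inlining, explicitly ties the load to the comparison that guards it:

    #include <stddef.h>

    struct table {
        size_t nentries;
        int   *entries;
    };

    /* Is this a bounds check, or just an arbitrary predicate?  The compiler
     * has no way to know it is meant to guard a later load. */
    static int idx_ok(const struct table *t, size_t i)
    {
        return i < t->nentries;
    }

    int lookup(const struct table *t, size_t i, int *out)
    {
        if (!idx_ok(t, i))
            return -1;
        *out = t->entries[i];   /* guarded only implicitly, across a call */
        return 0;
    }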

I agree that it would be a bit easier to fix these specific issues on Itanium. I don't think that makes this an "Itanium should have won!" moment.

Notes from the Intelpocalypse

Posted Jan 7, 2018 16:02 UTC (Sun) by mtaht (subscriber, #11087) [Link]

The discussions over at comp.arch (https://groups.google.com/forum/#!forum/comp.arch) have been quite informative.

And it does look like the Mill was invulnerable by design to Spectre/Meltdown. They did find and fix a bug where the compiler could lift a memory access ahead of its guard, but as near as I can tell that would have caused a segfault rather than a permissions violation.

