What about non-reclaimable performance losses?
Posted Mar 20, 2025 12:32 UTC (Thu) by Baughn (subscriber, #124425)
In reply to: What about non-reclaimable performance losses? by excors
Parent article: Better CPU vulnerability mitigation configuration
Posted Mar 20, 2025 12:36 UTC (Thu) by intelfx (subscriber, #130118)
Posted Mar 20, 2025 13:51 UTC (Thu) by excors (subscriber, #95769)
This is the simplified version. It used to be that every web browser had its own unique approach, based on some combination of reverse-engineering other browsers, reverse-engineering web pages that depended on the behaviour of other browsers, and just making it up as they went along. Sometimes their behaviour would depend on TCP packet boundaries. Sometimes they'd crash. None of it was documented.
Now there are standards that document it all in great detail, very carefully designed and tested to avoid breaking compatibility with billions of old web pages, and browsers have converged on those standards, so there's only one kind of bonkers behaviour instead of many.
If you're writing web pages you can avoid a lot of the complexity and performance issues by avoiding document.write(), using <script defer>, etc. But browsers can't avoid it, because the quickest way to lose users is to be incompatible with one web page that is important to them. Browsers, and CPU manufacturers, also need to compete on performance while supporting these features that were designed a decade before the first dual-core desktop CPU, so it's really hard to avoid being bottlenecked by single-thread CPU performance.
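As a rough sketch of the difference (the file names are invented for illustration), the blocking and non-blocking patterns look something like this in page script:

// Blocking: document.write() injects markup into the parser's input
// stream, so the parser has to stop, run the script, and re-tokenize
// whatever was written before it can continue.
document.write('<script src="legacy-widget.js"><\/script>');

// Non-blocking: a dynamically inserted script never blocks the parser;
// it is fetched in parallel and executed when it arrives.
const widget = document.createElement('script');
widget.src = 'modern-widget.js';
document.head.appendChild(widget);

// In markup, <script defer src="modern-widget.js"></script> does much the
// same thing declaratively: download in parallel, execute only after
// parsing has finished.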
Posted Mar 20, 2025 14:05 UTC (Thu) by farnz (subscriber, #17727)
That's why the general technique browsers use to handle this is to speculatively assume that the bonkers thing doesn't actually happen, and start again but using the slow interleaved serial route if they observe the bonkers thing. This puts pressure on the wider ecosystem to allow you to run things in parallel, since while you will work with the bonkers thing (you have to!), performance is much better if you stick to sanity. And if the browser has good tools for making your sites perform better, those tools will clearly flag up that you've done something bonkers that forces the browser to abandon the fast path and restart on the slow path.
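A toy sketch of that restart-on-violation shape (every name below is invented; real browsers implement this inside the HTML parser and preload scanner, not in page script):

class SpeculationAbort extends Error {}

// Pretend "fast" parse: runs ahead assuming no script ever calls
// document.write(). If the assumption is violated, bail out.
function parseAssumingNoDocumentWrite(html: string): string[] {
  if (html.includes('document.write')) {
    throw new SpeculationAbort('bonkers thing observed');
  }
  return html.split('\n').map((line) => `fast-parsed: ${line}`);
}

// Pretend "slow" parse: fully serial, interleaving script execution with
// parsing. Always correct, never parallel.
function parseInterleavedSerially(html: string): string[] {
  return html.split('\n').map((line) => `slow-parsed: ${line}`);
}

function loadPage(html: string): string[] {
  try {
    return parseAssumingNoDocumentWrite(html);  // optimistic fast path
  } catch (e) {
    if (e instanceof SpeculationAbort) {
      return parseInterleavedSerially(html);    // restart on the slow path
    }
    throw e;
  }
}

// A sane page stays on the fast path; a document.write() forces a restart.
console.log(loadPage('<p>hello</p>'));
console.log(loadPage('<script>document.write("<p>hi</p>")<\/script>'));

The real machinery is far more involved, but the control flow is the shape described above: keep the cheap optimistic result if the assumption held, otherwise discard it and redo the work on the slow, always-correct path.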
The net effect is that bonkers stuff still works (even if the original author is long gone), so you can still look at a monstrosity from 1997 in your current browser and have it work, but most sites will go towards sane over time because sane is faster.
Something similar applies to CPUs, too: it is reasonable for a CPU to slow down if you do something that's technically allowed but difficult to implement in a modern design, but not reasonable to break backwards compatibility just because it's hard to implement in a high-performance fashion. After all, if the code ran "fast enough" on a cacheless 16 MHz 80386, then it'll run "fast enough" on a modern PC, too, even if it forces the CPU to behave like a 100 MHz part rather than a 3 GHz one.
Speculatively assuming sanity

Because of backwards compatibility (you can't be sure that no web page anywhere does something bonkers), you have to be able to fall back to the interleaved serial execution model at any time.