
CXL 1: Management and tiering

Posted May 15, 2022 21:58 UTC (Sun) by willy (subscriber, #9762)
In reply to: CXL 1: Management and tiering by Paf
Parent article: CXL 1: Management and tiering

Oh yes, the CXL boosters have a future where everything becomes magically cheap. I don't believe that world will come to pass. I think the future of HPC will remain one- or two-socket CPU boxes with one or two GPUs, much more closely connected over CXL, but the scale-out interconnect isn't going away, and I doubt the scale-out interconnect will be CXL. Maybe it will; I've been wrong before.

I have no faith in disaggregated systems. You want your memory close to your [CG]PU. If you upgrade your [CG]PU, you normally want to upgrade your interconnect and memory at the same time. The only way this makes sense is if we've actually hit a limit in bandwidth and latency, and that doesn't seem to have happened yet, despite the sluggish adoption of PCIe4.

The people who claim "oh, you want a big pool of memory on the fabric behind a switch, connected to lots of front-end systems" have simply not thought about the reality of firmware updates on the switch or the memory controller. How do you schedule downtime for the N customers using the [CG]PUs? "Tuesday mornings, 7-9am, are network maintenance windows" just isn't a thing any more.



