
Nice to see an update

Posted Apr 7, 2026 1:52 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
In reply to: Nice to see an update by intelfx
Parent article: No kidding: Gentoo GNU/Hurd

> So, just like any *target* that would be hanging off the NVMe end of the IC?

It's an _additional_ computer.

> How come it's feasible to buy O(N) NVMe SSDs, but not feasible to buy O(N) Thunderbolt target ICs?

You need more than a Thunderbolt IC. You also need a full USB-C PHY with all its complexity. It ends up being expensive. There are also no standard multiplexing protocols for it.

As a result, high-end HDDs are now going to SAS (Serial Attached SCSI), which scales to 24 Gb/s and supports multiple devices on a single link. But it's a separate standard with physically incompatible connectors, so it's almost entirely absent from consumer hardware.
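For a rough sense of scale, the "24G" SAS marketing number and an NVMe drive's PCIe link can be compared after line-encoding overhead. This is a back-of-the-envelope sketch using the nominal figures from the respective specs (SAS-4: 22.5 Gbaud with 128b/150b encoding; PCIe Gen4: 16 GT/s per lane with 128b/130b encoding); real-world throughput is lower still due to protocol overhead:

```python
# Rough per-lane throughput comparison: SAS-4 ("24G") vs. PCIe Gen4.
# Nominal line rates and encodings only; protocol overhead is ignored.

def effective_gbps(line_rate_gbaud: float, data_bits: int, total_bits: int) -> float:
    """Usable data rate in Gb/s after line-encoding overhead."""
    return line_rate_gbaud * data_bits / total_bits

# SAS-4: 22.5 Gbaud, 128b/150b encoding -> 19.2 Gb/s of data per lane
sas4 = effective_gbps(22.5, 128, 150)

# PCIe Gen4: 16 GT/s per lane, 128b/130b encoding -> ~15.75 Gb/s per lane
pcie4_lane = effective_gbps(16.0, 128, 130)

print(f"SAS-4, one lane:     {sas4 / 8:.2f} GB/s")            # ~2.40 GB/s
print(f"PCIe Gen4 x4 (NVMe): {4 * pcie4_lane / 8:.2f} GB/s")  # ~7.88 GB/s
```

So a single NVMe drive's x4 Gen4 link carries roughly 3x the data of one SAS-4 lane; SAS's advantage is multi-device fan-out on that link, not raw per-link speed.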



Nice to see an update

Posted Apr 7, 2026 2:30 UTC (Tue) by intelfx (subscriber, #130118) [Link] (9 responses)

> You need more than a Thunderbolt IC. You also need a full USB-C PHY with all its complexity. It ends up being expensive. There are also no standard multiplexing protocols for it.

Look, the point was not that rack-scale systems should use Thunderbolt as it is. Thunderbolt is obviously a consumer technology that solves consumer use-cases.

The point was that it does not take some sort of obscenely profligate technology to route PCIe over a non-fragile cable with a reasonable connector and multiplexing.

Nice to see an update

Posted Apr 7, 2026 2:32 UTC (Tue) by intelfx (subscriber, #130118) [Link] (8 responses)

> There are also no standard multiplexing protocols for it.

Also, a "multiplexing protocol" is literally built into PCIe. The entity implementing it is called a PCIe switch.

Nice to see an update

Posted Apr 7, 2026 3:39 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

> Also, a "multiplexing protocol" is literally built into PCIe. The entity implementing it is called a PCIe switch.

Sigh. Have you _tried_ them? PCIe requires very tight bus arbitration rules and very fast signaling.

So you can easily get devices that simply split one x16 link into, say, four x4 links. But if you want a _true_ switch, you are in the realm of "Call us for pricing". Even the very simplest devices that do true switching cost more than $1,000: https://www.amazon.com/HighPoint-Technologies-Rocket-8-M-...

So yep, NVMe sucks. It's a bad standard for _drives_.

Nice to see an update

Posted Apr 7, 2026 9:13 UTC (Tue) by farnz (subscriber, #17727) [Link] (5 responses)

Highpoint are not the cheapest make, by a long shot. For example, this is a simple device based around the PEX88024 PCIe switch chip (which is a true switch, not a splitter), and that costs $200.

Nice to see an update

Posted Apr 7, 2026 10:26 UTC (Tue) by pizza (subscriber, #46) [Link] (4 responses)

> Highpoint are not the cheapest make, by a long shot. For example, this is a simple device based around the PEX88024 PCIe switch chip (which is a true switch, not a splitter), and that costs $200.

While that is certainly cheaper, you still managed to prove Cyberax's point.

(A SAS controller+cables are not only cheaper, but SAS drives are also far cheaper on a $/TB basis. And won't wear out on you like flash will..)

Nice to see an update

Posted Apr 7, 2026 10:59 UTC (Tue) by intelfx (subscriber, #130118) [Link] (3 responses)

> (<...> SAS drives are also far cheaper on a $/TB basis. And won't wear out on you like flash will..)

That's entirely unrelated to the choice of bus/interface.

Sigh. If you argue, please argue *correctly*.

Nice to see an update

Posted Apr 7, 2026 11:06 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> That's entirely unrelated to the choice of bus/interface.

Convenient, you cut out the directly-related first half of my reply.

> Sigh. If you argue, please argue *correctly*.

...Please follow your own advice before casting shade at others.

Meanwhile, are you seriously saying that "total solution cost" isn't a factor in bus/interface selection?

Nice to see an update

Posted Apr 7, 2026 11:35 UTC (Tue) by intelfx (subscriber, #130118) [Link] (1 responses)

> Convenient, you cut out the directly-related first half of my reply.

Because it's not the one I want to respond to. That point is discussed in the other subthread. And yes, it is somewhat more expensive, by virtue of NVMe being a more capable interface. That's basic economics.

> Please follow your own advice before casting shade at others.

I'm not casting shade, I'm directly saying that (a part of) your argument is invalid.

> Meanwhile, are you seriously saying that "total solution cost" isn't a factor in bus/interface selection?

You argued for SAS against NVMe on the basis that "SAS drives" (which you implied to be HDDs) are cheaper and supposedly more reliable than "NVMe drives" (which you implied to be flash-based). Both implications are invalid to make in an argument about *bus interfaces*.

Your argument is akin to saying that because a Ferrari is more expensive than a tricycle (while you are content with the latter), three wheels are better than four.

Nice to see an update

Posted Apr 7, 2026 11:55 UTC (Tue) by Wol (subscriber, #4433) [Link]

> You argued for SAS against NVMe on the basis that "SAS drives" (which you implied to be HDDs) are cheaper and supposedly more reliable than "NVMe drives" (which you implied to be flash-based). Both implications are invalid to make in an argument about *bus interfaces*.

Not only invalid in an argument over buses, but many, many years ago, when somebody did a shoot-out over flash drives, I was left with the very strong impression that flash drives outperformed rotating rust for reliability and longevity ... (okay, not the cheap bargain-basement stuff). The only advantage rust has over flash, once you are paying for decent quality, is that rust is a lot cheaper.

Cheers,
Wol

Nice to see an update

Posted Apr 7, 2026 10:57 UTC (Tue) by intelfx (subscriber, #130118) [Link]

> Sigh. Have you _tried_ them?

I did, in fact. (Well, not me specifically, but I was in charge of writing the firmware for a device with one and did see the sausage being made.) It's not that costly. And you don't need one per drive.

> So you can easily get devices that simply split one x16 link into, say, four x4 links

Yeah, that's not a switch, that's a splitter.
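The practical difference can be sketched with a toy model (the bandwidth figures are illustrative placeholders, not from any datasheet): a splitter statically bifurcates lanes, so an idle neighbor's share is simply wasted, whereas a true switch routes packets and lets whoever is active use the whole upstream link:

```python
# Toy model contrasting lane bifurcation ("splitter") with a true PCIe switch.
# Bandwidth numbers are illustrative placeholders.

UPSTREAM_GBPS = 64.0  # e.g. an x16 upstream link
DEVICES = 4

def splitter_throughput(active: int) -> float:
    """Static x4 partitions: each device gets a fixed quarter; idle lanes are wasted."""
    return active * (UPSTREAM_GBPS / DEVICES)

def switch_throughput(active: int) -> float:
    """Packet switching: any active devices share the full upstream link."""
    return UPSTREAM_GBPS if active > 0 else 0.0

for active in (1, 4):
    print(f"{active} active device(s): splitter {splitter_throughput(active):.0f} Gb/s, "
          f"switch {switch_throughput(active):.0f} Gb/s")
# 1 active device(s): splitter 16 Gb/s, switch 64 Gb/s
# 4 active device(s): splitter 64 Gb/s, switch 64 Gb/s
```

The splitter only matches the switch when every downstream device is busy at once; with a single active device it leaves three quarters of the upstream link idle.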

> So yep, NVMe sucks. It's a bad standard for _drives_.

You still haven't managed to support your point with anything more than "a strong opinion".


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds