Moving past TCP in the data center, part 1
Posted Nov 2, 2022 23:55 UTC (Wed) by Cyberax (✭ supporter ✭, #52523)
In reply to: Moving past TCP in the data center, part 1 by paulj
Parent article: Moving past TCP in the data center, part 1
The service for this protocol was used to move already encrypted data, so additional layers of encryption were unnecessary for us. But a more general protocol should definitely have support for it.
> Encryption at least prevents those with network super-user roles from snooping on server traffic, and those with super-user access on some server-roles from being able to snoop on traffic for other server-roles
Snooping at datacenter scale is surprisingly useless. If you're trying to do it by physically splicing into cables, you can observe only a small fraction of the traffic, and if you can do it on the machines that run the network code, it's probably game over already.
Posted Nov 3, 2022 9:59 UTC (Thu)
by paulj (subscriber, #341)
[Link] (4 responses)
If an attacker has access to machines, then without role authentication and encryption they can potentially use that access to widen the set of services and access-roles they can manipulate. (And authentication without a MAC is effectively the same as none - and if you're going to MAC the traffic, you can just as easily encrypt it.)
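(To make that last point concrete: with a modern AEAD mode such as AES-GCM, integrity protection and encryption are a single operation, so there is little to be saved by doing only the MAC. Below is a minimal sketch in Go, with a hypothetical per-connection key standing in for whatever a real handshake would negotiate.)

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // Hypothetical per-connection key; in practice this would come from
        // whatever key exchange the transport's handshake performs.
        key := make([]byte, 32)
        if _, err := rand.Read(key); err != nil {
            panic(err)
        }

        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }
        aead, err := cipher.NewGCM(block)
        if err != nil {
            panic(err)
        }

        nonce := make([]byte, aead.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            panic(err)
        }

        payload := []byte("rpc payload")

        // Seal encrypts and authenticates in one pass; the 16-byte GCM tag is
        // the "MAC" and comes along with the encryption.
        sealed := aead.Seal(nil, nonce, payload, nil)
        fmt.Printf("%d plaintext bytes -> %d sealed bytes\n", len(payload), len(sealed))

        // Open verifies the tag before returning any plaintext; tampering with
        // the ciphertext makes it fail.
        opened, err := aead.Open(nil, nonce, sealed, nil)
        if err != nil {
            panic(err)
        }
        fmt.Printf("recovered: %s\n", opened)
    }

(In a real transport the nonce would be derived from a per-direction counter rather than drawn at random for every message.)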
Bear in mind, in the large tech companies some of the server machines basically have a (host-controlled) switch built into them, to allow multiple hosts in a brick/sled to share the same PHY to the network. Also, the switches are basically (specialised) servers too. So an attacker could insert code in the switching agents to programme the switching hardware to selectively pick out certain flows, for later analysis to use for privilege widening/escalation - quite efficient.
Posted Nov 3, 2022 17:42 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Not quite true for very high-traffic services.
> Bear in mind, in the large tech companies some of the server machines basically have a (host controlled) switch built-in to them, to allow multiple hosts in a brick/sled to share the same PHY to the network.
It's typically more complicated. A host can contain one or more PHYs and they typically go to the TOR (top-of-the-rack) device that does further routing/NAT/whatever.
Posted Nov 4, 2022 10:28 UTC (Fri)
by paulj (subscriber, #341)
[Link] (2 responses)
There is one more PHY, to the BMC, which controls the chassis and can give serial access to the hosts (and some very slow, hacky IP/TCP to the host, IIRC).
There are some large enterprise "blade" type systems which also use onboard switching ASICs, I think, but the ones I know of use more traditional switching ASICs (e.g. HPE were using the Intel RRC10k), with actual NIC blocks for each host included in the ASIC. So these actually look and work a lot more like a traditional network, with proper buffering and flow control between the hosts, the switching logic and the upstream (the shared NICs mentioned above did not do this properly, at least in earlier iterations, causing significant issues).
Posted Nov 4, 2022 10:32 UTC (Fri)
by paulj (subscriber, #341)
[Link]
Posted Nov 23, 2022 22:25 UTC (Wed)
by Rudd-O (guest, #61155)
[Link]
Cyberax's views would fit right into, let's say, a Google in the year 2011. By 2015, after Snowden, it was already the consensus that everything needed to be encrypted, no exceptions.
And so all traffic inside Google (anything at all using Stubby) is, in fact, encrypted. As you said, if you're gonna MAC the traffic, might as well crypt it too. All those primitives are accelerated in the CPU, and some are fast enough in pure software now.
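(As a rough illustration of that last point, whether the AES-GCM fast path is available on a given x86-64 host can be read off the CPU feature flags; Go's crypto/aes, for instance, picks up AES-NI automatically when it is present. A small sketch using the golang.org/x/sys/cpu package:)

    package main

    import (
        "fmt"

        "golang.org/x/sys/cpu"
    )

    func main() {
        // AES-NI supplies the AES rounds; PCLMULQDQ supplies the carry-less
        // multiply used for GHASH, so AES-GCM wants both for its fast path.
        fmt.Println("AES-NI:   ", cpu.X86.HasAES)
        fmt.Println("PCLMULQDQ:", cpu.X86.HasPCLMULQDQ)
        fmt.Println("AVX:      ", cpu.X86.HasAVX)
    }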
