IB is faster and lower latency, but it is significantly more expensive, has tighter cable-length restrictions, and (IIRC) has many more wires in its cables, which makes the cables more expensive and more fragile
IB was designed as a system interconnect within a rack (or a couple of nearby racks)
ATA and SATA aren't general interconnects; they are drive interfaces
10 GbE is a fair comparison for IB, but it was designed to allow longer cable runs with fewer wires in the cable (being fairly compatible with existing Cat5-type cabling)
Virtualization and InfiniBand
Posted Aug 8, 2009 7:24 UTC (Sat) by abacus (subscriber, #49001)
What you wrote above about cabling is correct but completely irrelevant to this discussion. What I proposed is to use the IB APIs (RDMA) and software stack (IPoIB, SDP, iSER, SRP, ...) for communication between a virtual machine and the host system. In such a setup no physical cables are necessary. However, an additional kernel driver that implements the RDMA API and allows communication between guest and host will be necessary in the virtual machine.
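To illustrate the point (my own sketch, not taken from any existing driver): guest user space keeps using the standard libibverbs calls unchanged, whether the device underneath is a real HCA or a paravirtual one exposed by the host. Something along these lines (compile with -libverbs; the buffer size and access flags are just illustrative):

    /* guest-side sketch: standard libibverbs calls; the code is the same
     * whether the RDMA device underneath is a real HCA or a paravirtual
     * one provided by the host */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) {
            fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
            return 1;
        }

        /* protection domain plus a registered buffer: the memory
         * registration is what lets the peer (here: the host side)
         * access the pages directly, i.e. zero copy */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        void *buf = malloc(4096);
        struct ibv_mr *mr = (pd && buf) ?
            ibv_reg_mr(pd, buf, 4096,
                       IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE) : NULL;
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }

        printf("registered 4 KiB at %p, rkey 0x%x on %s\n",
               buf, mr->rkey, ibv_get_device_name(devs[0]));

        /* queue pair creation, posting work requests etc. omitted */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }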
Virtualization and InfiniBand
Posted Aug 8, 2009 9:27 UTC (Sat) by dlang (✭ supporter ✭, #313)
if you are talking about a virtual interface, why would you use either?
define a driver that uses page-allocation tricks to move data between the guest and the host for zero-copy communication; at that point you beat anything that's designed for a real network. (a rough sketch of such an interface is below)
then you can pick what driver to run on top of this interface (SCSI, IP, or something custom) depending on what you are trying to talk to on the other side.
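roughly, the interface could be a descriptor ring in memory visible to both guest and host. the sketch below is only illustrative (it runs in a single process and all names are made up); in a real driver the ring would live in shared pages and the addresses would be guest-physical pages the host maps directly, so the payload itself never gets copied:

    /* sketch of a shared descriptor ring between guest (producer) and
     * host (consumer); here it is just an in-process demo of the layout */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define RING_SIZE 16            /* must be a power of two */

    struct desc {
        uint64_t addr;              /* guest-physical address of the data page */
        uint32_t len;               /* bytes used in that page */
        uint32_t flags;             /* e.g. ownership bits */
    };

    struct ring {
        volatile uint32_t prod;     /* written by the guest */
        volatile uint32_t cons;     /* written by the host */
        struct desc desc[RING_SIZE];
    };

    /* guest side: publish a buffer without copying its contents */
    static int ring_post(struct ring *r, uint64_t addr, uint32_t len)
    {
        if (r->prod - r->cons == RING_SIZE)
            return -1;                           /* ring full */
        struct desc *d = &r->desc[r->prod & (RING_SIZE - 1)];
        d->addr = addr;
        d->len  = len;
        __sync_synchronize();                    /* descriptor before index */
        r->prod++;
        return 0;
    }

    /* host side: consume the next descriptor, if any */
    static int ring_take(struct ring *r, struct desc *out)
    {
        if (r->cons == r->prod)
            return -1;                           /* ring empty */
        *out = r->desc[r->cons & (RING_SIZE - 1)];
        __sync_synchronize();
        r->cons++;
        return 0;
    }

    int main(void)
    {
        static struct ring r;                    /* stands in for a shared page */
        char page[4096] = "hello from the guest";

        ring_post(&r, (uint64_t)(uintptr_t)page, (uint32_t)strlen(page) + 1);

        struct desc d;
        if (ring_take(&r, &d) == 0)
            printf("host sees %u bytes at %p: %s\n",
                   d.len, (void *)(uintptr_t)d.addr, (char *)(uintptr_t)d.addr);
        return 0;
    }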
Virtualization and InfiniBand
Posted Aug 8, 2009 10:52 UTC (Sat) by abacus (subscriber, #49001)
As I wrote above, implementing an IB driver would allow reuse of a whole software stack (called OFED) and the implementations of several communication protocols. Yes, it is possible to develop all this from scratch, but that is more or less reinventing the wheel.
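As a concrete illustration of that reuse (my example, not from the article): with the OFED stack, SDP can be slipped underneath an unmodified socket program, if I remember correctly simply by preloading it (LD_PRELOAD=libsdp.so ./client), so ordinary code like the sketch below keeps working while the data moves over IB. The address and port here are made up:

    /* ordinary socket client; with OFED's SDP preload the same binary
     * talks over the IB transport instead of TCP, which is the point
     * about reusing the existing stack */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);                        /* illustrative port */
        inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);   /* illustrative host */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "hello over whatever transport is underneath\n";
        write(fd, msg, sizeof(msg) - 1);
        close(fd);
        return 0;
    }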
Virtualization and InfiniBand
Posted Aug 13, 2009 4:54 UTC (Thu) by jgg (guest, #55211)
10GBASE-T is not compatible with Cat5e; it needs Cat6 cabling. It is also still a pipe dream: who knows what process node will be necessary to get acceptable cost and power. All 10GigE gear currently deployed uses CX-4 (identical to SDR IB) or XFP/SFP+ (still surprisingly expensive).
The big 10GigE vendors are desperately pushing the insane FCoE stuff to try to get everyone to re-buy all their FC switches and HBAs, since 10GigE has otherwise been a flop. IB is 4x as fast and 1/4 the cost of the 10GigE gear from Cisco :|