
400G
> ConnectX-7 400G
A 400Gbit NIC with a single port (or, alternatively, 2x 200Gbit).
Remember SCSI? It used to be the epitome of peripheral connectivity - not management, not ease of use, but raw connectivity. SCSI grew up into SAS (serial-attached SCSI). We're at SAS-3 now: 12Gbit per channel, 4 channels per port, up to 16 channels per card. That works out to 48Gbit per port, or 192Gbit total per card. SAS-4 is specified at 22.5Gbit per channel, but real hardware for it has barely shipped.

Even SAS-4 isn't enough to get full bandwidth out of a tray of NVMe disks: imagine 48 disks each throwing data at 4GB/s - that's 32Gbit * 48 == ~1.5Tbit/s. OTOH, more than one SAS HBA would be required to connect such a tray to the host anyway - you'd oversaturate a PCIe v5 x16 connection (~50GB/s) trying to do so. (You'd nearly saturate four of them; see the back-of-envelope math below.)
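A minimal sketch of that arithmetic in Python, assuming 16 lanes per HBA, the hypothetical 48-disk tray at 4GB/s per disk, and ~50GB/s usable on a PCIe v5 x16 slot (all numbers taken from the paragraph above):

```python
# Back-of-envelope bandwidth check; everything in Gbit/s.
GBIT = 1
GBYTE = 8 * GBIT  # 1 GB/s == 8 Gbit/s

sas3_per_lane = 12 * GBIT
sas4_per_lane = 22.5 * GBIT
lanes_per_port = 4
lanes_per_card = 16  # assumption: a 16-lane (4-port) HBA

sas3_per_port = sas3_per_lane * lanes_per_port   # 48 Gbit/s
sas3_per_card = sas3_per_lane * lanes_per_card   # 192 Gbit/s
sas4_per_card = sas4_per_lane * lanes_per_card   # 360 Gbit/s

nvme_tray = 48 * 4 * GBYTE   # 48 disks * 4 GB/s == 1536 Gbit/s (~1.5 Tbit/s)
pcie5_x16 = 50 * GBYTE       # ~50 GB/s usable == ~400 Gbit/s

print(f"SAS-3 per port: {sas3_per_port:g} Gbit/s")
print(f"SAS-3 per card: {sas3_per_card:g} Gbit/s")
print(f"SAS-4 per card: {sas4_per_card:g} Gbit/s")
print(f"NVMe tray:      {nvme_tray:g} Gbit/s")
print(f"PCIe v5 x16:    {pcie5_x16:g} Gbit/s")
print(f"x16 slots needed for the tray: {nvme_tray / pcie5_x16:.1f}")
```

Running it gives 48 / 192 / 360 Gbit/s for SAS, ~1536 Gbit/s for the tray, and about 3.8 slots' worth of PCIe v5 x16 - the "nearly four" above.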
Networking now beats locally-attached storage - albeit with rather thick cables that give you up to 2m of reach. Wow. I'm kind of surprised that disk shelves don't use ethernet(-like) interfaces for connectivity - it's smaller, simpler, and potentially faster. Maybe SAS is lower latency, or more redundant.
Crazy. The world is really starting to go big-iron again. Mainframes will make a return because individual, disparate servers just can't keep up.
One fun thought: you can kind of do whatever with SAS. Set up a target and an initiator - actual computers with HBA cards, not just external enclosures - and you can genuinely run a network between them. It's not for the faint of heart, but it can be done, so you could get minimal-latency, high-throughput connectivity from one host to another over a SAS port - say 48Gbit, today, for the cost of a couple of cards on eBay and some time (lots..) setting it up. TBH I thought that's how infiniband et al. got their networking done - over something like SAS - but it turns out to be a different protocol.