It's called Ethernet...
The "missing" interconnect is Gigabit Ethernet, with 10 Gbit Ethernet either available now or "real soon now" for GX. There is an internal Infiniband interconnect between HA node pairs, however this is not used for the cluster traffic. The main GX interconnect (and now for "GX Mode" in DoT 8) among node pairs has always been Ethernet. The disks / shelves themselves are FCAL (looped 4 GB Fibre Channel), the shelves are FC cabled to 2 HA heads each for redundancy. For the Ethernet interconnects, slot in Cisco switches, or your own preferred / supported switch vendor. NetApp resells them as well I believe.
I would love to see an InfiniBand cluster-interconnect option. IB is already used in 2-node DoT 7G (and now "7-mode") HA clusters, as well as in the HA pairs in GX (though solely for replaying NVRAM contents and taking over shelf ownership during node-down failovers), but so far it is not offered for the cluster interconnect... it may only be a matter of time, though. Latency would definitely improve with IB, even compared to 10GbE, although the PCI-E bus on the controller heads may still be a bottleneck. Right now, if your compute cluster is IB-interconnected, you have to use an IB-to-Ethernet gateway, which IMHO is a total kludge. As soon as a native IB node interconnect and native IB client connectivity are available, I think GX mode will really begin to shine.
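To put rough numbers behind the PCI-E worry, here's a back-of-envelope sketch in Python. The link rates and encoding overheads are standard figures; the assumption that the controller heads expose a PCIe 1.x x8 slot is mine, not anything NetApp publishes.

    # Rough one-direction bandwidth comparison: network link vs. host PCIe slot.
    # The PCIe 1.x x8 slot is an assumption about the controller head, not a spec.

    def pcie_gb_per_s(lanes, gt_per_s=2.5, encoding=8 / 10):
        """Usable one-direction PCIe bandwidth in GB/s (GT/s * encoding -> bytes)."""
        return lanes * gt_per_s * encoding / 8

    links_gb_per_s = {
        "10GbE":     10 * (64 / 66) / 8,  # ~1.2 GB/s after 64b/66b encoding
        "IB 4x DDR": 20 * (8 / 10) / 8,   # 2.0 GB/s after 8b/10b encoding
        "IB 4x QDR": 40 * (8 / 10) / 8,   # 4.0 GB/s after 8b/10b encoding
    }
    slot = pcie_gb_per_s(lanes=8)         # assumed PCIe 1.x x8 slot: ~2.0 GB/s

    for name, link in links_gb_per_s.items():
        print(f"{name}: link {link:.1f} GB/s, usable through slot {min(link, slot):.1f} GB/s")

The point being that a faster IB link alone wouldn't be the whole story if the HCA sits behind an older x8 slot.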
Maybe that's why no new SPEC SFS2008 numbers have been published yet for DoT 8 in GX mode... though the old GX system still broke 1 million IOPS (on the older SFSv3) with 24 nodes on GbE. I'd expect much better cluster results to come out "real soon now", unless there are some serious unforeseen issues with the performance of the new DoT 8 GX mode.
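Just for scale, a quick bit of arithmetic on that published figure (SFS reports a cluster total, so this is only a straight average, not how the load actually split across nodes):

    # Average per-node rate implied by the old 24-node GX result
    total_ops_per_s = 1_000_000   # ~1 million SFS ops/sec for the whole cluster
    nodes = 24
    print(f"~{total_ops_per_s / nodes:,.0f} ops/sec per node on average")
    # -> ~41,667 ops/sec per node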