Ugh!
Let's all say this together:
Fibre Channel doesn't scale!
MDS is an amazing product and I have used it many times in the past. But let's be honest, it doesn't scale. All-flash systems from NetApp, for example, have a maximum of 64 FC ports per HA pair (a limit so antiquated it's not worth picking on here), which means the total system bandwidth is about 8Tb/sec. Of course, HA pairs mean you have to design for the total failure of a single controller, which cuts that in half. Then consider that half of that bandwidth is upstream and the other half is down: half connects drives to the system, the other half delivers bandwidth to the servers. So we're down to 16 reliable links per cluster. And since the two controllers in an HA pair have to synchronize, let's cut that in half again if we don't want contention from array coherency.
An NVMe drive consumes about 20Gb/sec of bandwidth. So that's a ceiling of about 25 active drives in the array. Of course there can be many more drives, but you will never get the bandwidth of more than 25 of them. With scale-out it is possible to go wider, but FC doesn't do scale-out and MPIO will crash and burn if you try. iSCSI can, though.
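To make that arithmetic concrete, here is a back-of-envelope sketch of the halving argument. The 32Gb/sec per-port line rate is my assumption (it's the rate that reproduces the ~25-drive ceiling); the 64-port limit, the halvings, and the 20Gb/sec-per-drive figure come straight from the argument above.

    # Back-of-envelope sketch of the FC bandwidth ceiling described above.
    # Assumption: 32Gb/sec per FC port. The 64-port limit, the halvings, and the
    # ~20Gb/sec-per-NVMe-drive figure come from the argument in the text.

    FC_PORTS_PER_HA_PAIR = 64     # NetApp all-flash maximum per HA pair
    LINK_GBPS = 32                # assumed per-port line rate (32Gb FC)
    NVME_DRIVE_GBPS = 20          # bandwidth a single NVMe drive can consume

    raw_gbps = FC_PORTS_PER_HA_PAIR * LINK_GBPS   # every port lit, best case
    after_failure = raw_gbps / 2                  # design for one dead controller
    downstream = after_failure / 2                # half upstream (drives), half to servers

    usable_links = FC_PORTS_PER_HA_PAIR // 2 // 2 # the "16 reliable links"
    drive_ceiling = int(downstream // NVME_DRIVE_GBPS)

    print(f"raw fabric bandwidth  : {raw_gbps} Gb/s")
    print(f"after controller loss : {after_failure:.0f} Gb/s")
    print(f"server-facing         : {usable_links} links, {downstream:.0f} Gb/s")
    print(f"NVMe drives saturated : {drive_ceiling}")   # ~25, matching the text
    # Halve it once more for HA coherency traffic and you're down to ~12 drives.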
Now consider general performance. FC controllers are REALLY expensive. Dual-ported SAS drives are ridiculously expensive. Scaling out performance across a cluster of HA pairs would require millions of dollars in controllers and drives. And because you are so limited on controllers (whether by cost or hard limits), the processing required for SAN operations becomes insane. See, the best controllers from the best companies are still bound by the processing needed for operations like hashing, deduplication, compression, etc. Let's assume you're using a single state-of-the-art FPGA from Intel or Xilinx. Internal memory and/or crossbar performance will bottleneck the system further, and using multiple chips will actually slow it down, since it would consume all the SerDes lanes just for chip-to-chip interconnect at 1/50th the speed (or worse) of the internal macro ring-bus interconnects. If you do this in software instead, even the fastest CPUs can't hold a candle to the performance needed to process a terabit of block data per second. The block-lookup database alone would kill Intel's best modern CPUs.
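To put a number on the software path, here is a rough sketch of what a terabit of block data per second means in per-block work. The 4KiB block size, SHA-256 fingerprints, and ~100ns DRAM lookup latency are my assumptions for illustration; only the terabit figure is from the paragraph above.

    # Rough sketch of what "a terabit of block data per second" means for dedup.
    # Assumptions (mine, for illustration): 4KiB blocks, a SHA-256 fingerprint per
    # block, ~100ns per random DRAM access into a huge fingerprint table.

    TERABIT_BPS = 1_000_000_000_000   # incoming block data, bits per second
    BLOCK_BYTES = 4096                # assumed block size
    DRAM_LOOKUP_NS = 100              # assumed random-access latency into the table

    blocks_per_sec = (TERABIT_BPS / 8) / BLOCK_BYTES
    stall_per_sec = blocks_per_sec * DRAM_LOOKUP_NS * 1e-9

    print(f"blocks to hash and look up : {blocks_per_sec:,.0f} per second")   # ~30.5 million
    print(f"pure memory-stall time     : {stall_per_sec:.1f} s per wall-clock second")
    # ~30M random lookups/sec is latency bound, not compute bound: you would burn
    # several cores doing nothing but pointer-chasing the block-lookup database,
    # before hashing, compression, or actually moving any data.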
FC is wonderful and it's easy. Tools like the Cisco MDS even make it a true pleasure to work with. But as soon as you need performance, FC is a dog with fleas.
Does it really matter? Yes. When you can buy a 44-real-core, 88-vCPU blade with 1TB of RAM on weekly deals from server vendors, a rack with 16 blades will devastate any SAN and SAN fabric, making the blades a completely wasted investment. Blades need local storage with 48-128 internal PCIe lanes dedicated to storage to be cost-effective today. That means the average blade should have a minimum of 6x M.2 PCIe NVMe drives internally. (NVMe IS NOT A NETWORK!!!) Then, for mass storage, additional internal SATA SSDs make sense. A blade should have AT LEAST 320Gb/sec of storage and RDMA bandwidth, and 960Gb/sec is more reasonable. As for cold storage, using an old crappy SAN is perfectly OK.
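For anyone wondering where those lane and bandwidth numbers land, here is a quick sketch. The PCIe 3.0 lane rate and x4-per-M.2 link width are my assumptions; the 6-drive minimum, the 48-128 lane range, and the 320/960Gb/sec targets are from the paragraph above.

    # Sketch of where the blade-local lane and bandwidth numbers land.
    # Assumptions (mine): PCIe 3.0 at ~7.9Gb/s usable per lane, x4 per M.2 drive.
    # The 6-drive minimum, 48-128 lanes, and 320/960Gb/s targets are from the text.

    GBPS_PER_GEN3_LANE = 7.9
    LANES_PER_M2 = 4
    M2_DRIVES = 6

    m2_gbps = M2_DRIVES * LANES_PER_M2 * GBPS_PER_GEN3_LANE
    print(f"6x M.2 NVMe (x4 Gen3 each): ~{m2_gbps:.0f} Gb/s local storage bandwidth")

    for target in (320, 960):
        print(f"{target} Gb/s target needs ~{target / GBPS_PER_GEN3_LANE:.0f} Gen3 lanes")
    # ~41 and ~122 lanes respectively -- roughly the 48-128 lane range the text
    # calls for, and far more than any pair of FC vHBAs will deliver to one blade.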
Almost all poor data center performance today is because of the SAN. 32Gb FC will drag these problems out for 5 more years. Even with vHBAs offloading VM storage, the computational cost of FC is absolutely stupid expensive.
Let's add one final point: FC and SAN are the definition of stupid when it comes to container storage.
FC had its day and I loved it. Hell, I made a fortune off of it. I dumped it because it is just a really, really bad idea in 2017.
If you absolutely must have a SAN, consider using iSCSI instead. It is theoretically far more scalable than FC because iSCSI rides on TCP, which uses sequence numbers rather than "reliable paths" to deliver packets in order. By running iSCSI over multicast (which works shockingly well), real scale-out can be achieved. Add storage replication over RDMA and you'll really rock it!