Problem with SAN in general
I was recently told by a colleague that his company was about to upgrade the firmware on their SAN controllers due to performance problems on a nearly one-exabyte SAN. I asked, "Do you have a mirror?" He said they have a backup but not a mirror. I asked how long it would take to restore the backup, and the answer was nearly a month. I asked whether they had fully verified the contents of their backup, and he said not recently, because it would take a month just to stream the data back out.
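If streaming the whole backup is off the table, you can at least checksum everything at backup time and verify a random sample on a schedule. A minimal sketch, assuming GNU coreutils; the `/data` and `/backup` paths are hypothetical:

```shell
# At backup time, record a checksum for every file (paths are illustrative):
find /data -type f -print0 | xargs -0 sha256sum > /backup/manifest.sha256

# Later, verify a random sample instead of streaming the whole set:
shuf -n 1000 /backup/manifest.sha256 > /tmp/sample.sha256
sha256sum -c /tmp/sample.sha256 --quiet && echo "sample OK"
```

Sampling doesn't prove the whole backup is good, but it catches systematic problems (bad tapes, silent truncation) in hours instead of a month.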
The problem with SAN is that it centralizes all problems: it's a single point of failure, and the performance of even the fastest NVMe SANs is very slow compared to distributed file systems.
They managed to do the upgrade, and it will now take about six weeks to run the rebuild on the array. The rebuild is destructive, and they will have no idea whether the problem is fixed until it is done. They also don't know what caveats the upgraded firmware will introduce.
I don't experience these problems because I run two distributed file systems: one for performance and one for transaction-oriented journaling. I have about 1 Tb/s of bandwidth between the two systems, which can easily be saturated during transfer operations. What's best is that my system cost less than a tenth of what his system cost per byte, and instead of adding new disk shelves, I add disks, bandwidth, and performance with each expansion. Instead of replacing SANs, I simply remove obsolete nodes and add newer, more efficient ones.
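The back-of-envelope math makes the difference concrete. 1 Tb/s is 125 GB/s, so moving a petabyte takes a couple of hours; the figures below are illustrative, not measurements from either system:

```shell
# Transfer time at a sustained 1 Tb/s link (= 125,000,000,000 bytes/s).
# Shell integer arithmetic; sizes are illustrative round numbers.
bytes_per_sec=125000000000

petabyte=1000000000000000
echo "$(( petabyte / bytes_per_sec )) seconds per PB"     # 8000 s, ~2.2 hours

exabyte=1000000000000000000
echo "$(( exabyte / bytes_per_sec / 86400 )) days per EB" # 92 days
```

Note the second figure: even at a full 1 Tb/s, an exabyte takes about three months to stream, which is exactly why my colleague's month-long restore estimate for a fraction of that bandwidth is plausible.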
Trick one: Don't use VMware. Linux-based GlusterFS systems only work with it over iSCSI or Fibre Channel, which is slow and doesn't scale. VAAI NAS isn't available on Linux because of VMware's stupid policy of locking out open source developers.
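The scaling point is about the client path: Gluster's native FUSE client talks to every brick directly, while an iSCSI export funnels all I/O through one gateway. A minimal sketch of a replicated volume with a native mount; hostnames, brick paths, and the volume name `vol0` are hypothetical:

```shell
# On node1: form the trusted pool and create a 3-way replicated volume.
gluster peer probe node2
gluster peer probe node3
gluster volume create vol0 replica 3 \
    node1:/bricks/b0 node2:/bricks/b0 node3:/bricks/b0
gluster volume start vol0

# On a client: the native FUSE mount talks to all bricks directly,
# so there is no single iSCSI/FC gateway to bottleneck on.
mount -t glusterfs node1:/vol0 /mnt/vol0
```

Adding capacity later is `gluster volume add-brick` plus a rebalance, which is the "add nodes instead of shelves" model described above.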
Trick two: If you absolutely must run VMware, use Oracle Solaris for storage. Unlike EMC, NetApp, 3PAR, etc., it can actually scale properly for both performance and capacity. Consider Oracle InfiniBand for the storage interconnect. Take classes on ZFS. Use Oracle servers: if you can afford $15,000 per blade for VMware, you can afford Oracle servers for storage. Oh, and don't use InfiniBand for networking VMware or NSX; the CPU cost is too high.
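Part of what the ZFS classes teach is the expansion model: a pool is a set of vdevs, and you grow it by adding vdevs rather than rebuilding the array. A minimal sketch; the pool name `tank` and Solaris-style device names are hypothetical:

```shell
# Create a pool from two mirrored vdevs; I/O stripes across both.
zpool create tank \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0

# Later, capacity AND performance grow by adding another mirror --
# no destructive rebuild, no forklift upgrade.
zpool add tank mirror c0t4d0 c0t5d0

zpool status tank
```

Contrast that with the six-week destructive rebuild above: adding a vdev is online and takes effect immediately.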