Re: The business case is pretty good.
1) You don't get lower latency from HCI compared to SAN, in my experience. For writes it is often noticeably worse, because HCI mirrors/replicates each write to at least one more node over an Ethernet network before acknowledging it.
2) Throw in another host and licence it for various bits of software (HCI/VMware/application) instead of reducing the CPU overhead by ceasing to pretend you're writing to individual spinning disks. Smart move.....
3) Latency can be induced by requests being serialised through limited queues. Applications see that latency. We run applications for business benefit. There's the linkage you were looking for.
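To make point 3 concrete, here's a toy model of how a shallow queue serialises requests and turns into application-visible latency. All the numbers (request counts, queue depths, service time) are made up for illustration, not measurements from any real array:

```python
# Toy model: requests that can't fit in the queue wait behind earlier ones,
# and that waiting shows up to the application as latency.

def completion_times_us(n_requests, queue_depth, service_us):
    """Finish time for each request when n_requests arrive at once and the
    device works on queue_depth of them in parallel, each taking service_us."""
    return [((i // queue_depth) + 1) * service_us for i in range(n_requests)]

# Same device service time (150 us), very different worst-case latency:
shallow = completion_times_us(n_requests=32, queue_depth=1, service_us=150)
deep = completion_times_us(n_requests=32, queue_depth=32, service_us=150)

print(max(shallow))  # 4800 us: the last request waited behind 31 others
print(max(deep))     # 150 us: nothing queued, so device time is all you see
```

Same storage, same workload; the only difference is how much queueing the path in front of it imposes.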
4) Any latency-sensitive application on fast storage would benefit. Transit latency didn't matter when it was 40 microseconds out of 5,000 microseconds (aka 5 milliseconds) - it was a tiny rounding error.
When the storage response time is 150 microseconds, perhaps you should start paying attention, as transit is now a decent percentage of the overall response time.
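The back-of-envelope arithmetic behind points 4 and 5, using the figures from the post (40 us transit, 5,000 us spinning-disk response vs 150 us fast-media response); treating total response time as simply transit plus media time is a simplification:

```python
# Same 40 us of transit latency, wildly different significance
# depending on how fast the storage behind it is.

def transit_share(transit_us, storage_us):
    """Fraction of total response time spent in transit."""
    return transit_us / (transit_us + storage_us)

print(f"{transit_share(40, 5000):.1%}")  # spinning disk era: ~0.8%
print(f"{transit_share(40, 150):.1%}")   # fast flash/NVMe: ~21.1%
```

A rounding error becomes a fifth of your response time without the fabric getting any slower.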
5) Not everyone is moving away from FC. Moving to InfiniBand means purchasing new switches and dealing with stringent distance requirements. Existing FC users starting to use FC-NVMe for latency-sensitive workloads just need to confirm that the OS, HBAs and the storage array can talk FC-NVMe. They have NO SWITCH HARDWARE COST if they have Gen 5 or Gen 6 switches (which is pretty much anything purchased in the past ~6 yrs).
End of lesson.