The business case is pretty good.
The main benefits of moving to NVMe on the host (as opposed to just putting NVMe drives into an array while still speaking SCSI over FC or iSCSI) are:
1. Lower latency
2. Lower CPU consumption on the host
3. No need to manage queue depths, because the queues are effectively infinite
None of that will make much difference if you're on spinning disk, happy with 1-2 millisecond access times, or only doing about 10,000 IOPS per host. But if you're doing heavy-duty random access, like using your array to feed deep learning training workloads on a farm of NVIDIA DGX boxes, then those things make a big difference.
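To put the queue-depth point in perspective, here's a back-of-envelope sketch comparing the outstanding-command capacity of a typical SCSI/FC setup with what the NVMe spec allows. The per-LUN queue depth and LUN count are illustrative assumptions; real limits vary by HBA, array, and OS driver.

```python
# Rough comparison of outstanding-command capacity.
# SCSI side: assumed values for a typical FC host (not a hard limit).
scsi_queue_depth = 64          # assumed per-LUN queue depth on the HBA
scsi_luns = 16                 # assumed number of LUNs on the host

# NVMe side: per the NVMe spec, up to 65,535 I/O queues per controller,
# each up to 65,536 commands deep.
nvme_queues = 65_535
nvme_queue_depth = 65_536

scsi_outstanding = scsi_queue_depth * scsi_luns
nvme_outstanding = nvme_queues * nvme_queue_depth

print(f"SCSI/FC outstanding commands: {scsi_outstanding:,}")
print(f"NVMe outstanding commands:    {nvme_outstanding:,}")
```

Even with generous SCSI assumptions, the gap is several orders of magnitude, which is why queue-depth tuning stops being a management task.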
Plus, getting more performance and lower overheads from a straightforward software upgrade (which is what moving from SCSI over FC to NVMe/FC should be) is a nice win.
I wrote some of this up in detail here: https://www.linkedin.com/pulse/how-cool-nvme-part-4-cpu-software-efficiency-john-martin/