Re: The business case is pretty good.
1) Writes in controllers are normally mirrored over PCIe - does that sound quicker than Ethernet? Do you know what the fabric latency is in a normal FC environment (hint: it's not a lot)?
Just because something sounds quicker doesn't mean it is quicker, which reminded me of this quote:
"We all know that light travels faster than sound. That's why certain people appear bright until you hear them speak."
2) Cost analysis - for sure - if you have a tiny number of hosts, shared storage is hard to justify. "Hellish", though, suggests you've been reading too many VSAN marketing slides, where they push the BS line that SSDs in storage arrays cost 22.48 times as much as SSDs in x86 servers. If you had to put a number on it for that illustration, 1 times as much is a lot closer to the truth...
3) If you put all the storage in the box with the CPU - cool idea. How do I make sure I don't lose transactions, or cope with server downtime and workload migration? Sounds very enterprise. Fancy a job at TSB doing their infrastructure?
4) Re point 4:
Any latency-sensitive application sitting on fast storage would benefit. Transit latency didn't matter when it was 40 microseconds out of 5000 microseconds (i.e. 5 milliseconds) - it was a tiny rounding error.
When the storage response time is 150 microseconds, perhaps you should start paying attention, because it's now a decent percentage of the overall response time.
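To put rough numbers on that, here's a quick back-of-the-envelope sketch using the illustrative figures above (40 us transit, 5 ms vs 150 us media), not measured data:

```python
# Back-of-the-envelope: fabric transit latency as a share of total I/O response time.
# The 40 us transit figure and the 5 ms / 150 us media response times are the
# illustrative round numbers from the comment above, not measurements.

def transit_share(transit_us: float, media_us: float) -> float:
    """Fabric transit latency as a percentage of the total response time."""
    return 100.0 * transit_us / (transit_us + media_us)

# Spinning-disk era: ~5 ms media response time -> transit is a rounding error.
print(f"HDD-era share:  {transit_share(40, 5000):.1f}%")   # ~0.8%

# Fast NVMe media: ~150 us response time -> transit is suddenly worth noticing.
print(f"NVMe-era share: {transit_share(40, 150):.1f}%")    # ~21.1%
```

Under one percent versus north of twenty percent - that's the whole argument for caring about the fabric once the media gets fast.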
5) The cost for existing FC users to try FC-NVMe is pretty near zero in a lot of cases, with extra performance as the reward. What's the lock-in again?
Let's look at the quote from the article again:
"FC-NVMe is ideal for existing enterprises that already employ Fibre Channel, where NVMe-oF over RDMA Ethernet is best suited for green-field "Linux" environments."
What's your problem with the above, and do you have a view on how Greg has managed not to be found out at some point in his 40-year career?