Solutions desperately seeking problems
Sub-millisecond latencies from a SAN are readily and commercially available, and have been for close to a decade. There are corner-case applications that benefit from even lower latencies, and that's where the in-memory solutions come in. Latency is solved for our generation, at least until the rest of the application stack catches up: CPU speeds are no higher now than they were 10 years ago, so trimming a few microseconds off data access will make next to no difference to most applications and users.
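The arithmetic behind that claim is essentially Amdahl's law: if storage access is only a slice of a request's total time, shrinking it barely moves the end-to-end number. A rough sketch, using purely illustrative figures (none of these numbers come from a real measurement):

```python
# Back-of-envelope, Amdahl-style: speeding up one component only helps
# in proportion to that component's share of total request time.
def end_to_end_us(storage_us: float, other_us: float) -> float:
    """Total request latency in microseconds: storage plus everything else."""
    return storage_us + other_us

app_work_us = 900.0   # assumed CPU/app/network time per request
san_us = 500.0        # assumed sub-millisecond SAN access
in_mem_us = 10.0      # assumed aggressive in-memory access

before = end_to_end_us(san_us, app_work_us)      # 1400.0 us
after = end_to_end_us(in_mem_us, app_work_us)    # 910.0 us
speedup = before / after                         # ~1.54x, not 50x

print(f"before: {before} us, after: {after} us, speedup: {speedup:.2f}x")
```

A 50x faster storage tier yields roughly a 1.5x faster request here, because the rest of the stack dominates; that is the sense in which trimming microseconds off data access buys most applications very little.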
The marketecture fluff of claiming that a processor embedded in a disk controller can solve complex computational problems like facial recognition is smoke and mirrors: such an application would also need access to other network assets such as databases, so you have merely moved a little latency and bandwidth to another part of the stack, at the expense of reinventing the wheel and raising the acquisition cost of the endpoint devices.
Data Gravity went bust taking a similar approach: a great solution, but it didn't solve a problem anyone actually had.
Latency simply isn't the bottleneck at the moment.