An improvement over //M, but is it really revolutionary?
50% lower latency, 2x bandwidth, 4x performance density.
Let's look at these in reverse order. The 4x performance density claim rests on the fact that only 10 modules are needed to saturate the controllers, versus 44 in the case of //M. The reality is that peak performance is still limited by the controllers; nothing changes there. The number of drives/modules will vary from customer to customer based on capacity needs anyway. Going from 44 modules to 10 may be an improvement for //M customers, but not necessarily against other flash vendors, which already do a lot better than 44 flash modules today.
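To make that concrete, here is a minimal sketch of the density arithmetic. The peak IOPS figure is a made-up assumption (Pure shared only relative comparisons, no absolute numbers):

```python
# Hypothetical controller-limited peak -- an assumed number for illustration,
# not a published Pure figure. It is the same for //M and //X because the
# controllers, not the modules, set the ceiling.
PEAK_IOPS = 300_000

for name, modules in [("//M", 44), ("//X", 10)]:
    density = PEAK_IOPS / modules
    print(f"{name}: {modules} modules -> {density:,.0f} IOPS per module")
```

The per-module "density" figure improves 4.4x, but the peak the customer actually gets is unchanged.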
2x bandwidth at 65% reads on a 32K block size. No real numbers were shared, only a relative comparison. This is great for that specific workload. Peak bandwidth also matters for a scale-up array like Pure's, but it is not necessarily a challenge for others, especially vendors with scale-out arrays, where the peak performance of a single node doesn't matter. At the end of the day, what matters is the $/IOP.
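The $/IOP point can be sketched as a simple ratio. All prices and IOPS figures below are hypothetical placeholders, not vendor data:

```python
# Toy $/IOP comparison -- every number here is a made-up assumption.
def dollars_per_iop(price_usd, iops):
    """Cost per IOPS delivered: the metric that actually matters."""
    return price_usd / iops

scale_up = dollars_per_iop(500_000, 300_000)   # hypothetical scale-up array
scale_out = dollars_per_iop(350_000, 250_000)  # hypothetical scale-out node
print(f"scale-up:  ${scale_up:.3f}/IOP")
print(f"scale-out: ${scale_out:.3f}/IOP")
```

A faster box at a higher price can still lose on this metric, which is why relative "2x" claims alone don't settle the comparison.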
50% lower latency. Again, no real numbers were shared, only a relative comparison. If I can get 0.8 ms latency today on an all-flash system and 0.4 ms on //X, is that low enough for me to pay the premium? Does my application actually need the extra 0.4 ms?
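Whether that 0.4 ms matters depends on how much of the application's critical path is serial I/O. A toy model, using only the 0.8 ms vs 0.4 ms figures from the question above (the serial-read counts are arbitrary assumptions):

```python
# Toy model: a transaction that issues N dependent (serial) reads.
# Latencies are the 0.8 ms / 0.4 ms figures discussed above.
def txn_time_ms(serial_reads, latency_ms):
    return serial_reads * latency_ms

for n in (1, 10, 100):
    saved = txn_time_ms(n, 0.8) - txn_time_ms(n, 0.4)
    print(f"{n} serial reads: saves {saved:.1f} ms per transaction")
```

An app doing one read per transaction barely notices; one doing hundreds of dependent reads might. That is the question each customer has to answer before paying the premium.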
Higher usable capacity with custom modules -
Pure's //M has one of the worst usable-to-raw capacity ratios per drive. With custom modules they get to ~57%, which is still lower than the typical >65% efficiency that most other all-flash systems achieve.
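In concrete terms, here is what those efficiency ratios mean for a hypothetical 100 TB of raw flash (the raw figure is an assumption; only the 57% and 65% ratios come from the discussion above):

```python
RAW_TB = 100  # hypothetical raw capacity, for illustration only

pure_usable = RAW_TB * 0.57     # ~57% usable with Pure's custom modules
typical_usable = RAW_TB * 0.65  # typical >65% for other all-flash systems
print(f"Pure //X: {pure_usable:.0f} TB usable")
print(f"typical all-flash: {typical_usable:.0f} TB usable")
```

That 8-point gap compounds directly into $/GB: the customer pays for raw flash but sells themselves on usable.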
Let's say Pure has incrementally better performance than anyone else out there (until others get to NVMe - not very long!). What do they bring to the table besides performance, dedup/compression, and proactive support through Pure1? Last I checked, those were table stakes.