So why no enterprise arrays?
One rather key issue is that these drives use a PCIe x4 interface, and the latest devices can fully load those four lanes under test conditions. That level of performance is enough to overwhelm a 10Gbit Ethernet link and push a 40Gbit link close to saturation. In many ways this just makes them too fast for use in large arrays: why bother, when you can simply deploy current SAS-based options?
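As a rough sanity check, here is the bandwidth arithmetic; the 3.5 GB/s sequential read figure below is an assumed ballpark for a PCIe 3.0 x4 NVMe drive, not a quoted spec:

```python
# Rough bandwidth arithmetic; the 3.5 GB/s figure is an assumed ballpark
# for a PCIe 3.0 x4 NVMe drive, not a measured value.

drive_read_gbytes = 3.5                    # assumed sequential read, GB/s
drive_read_gbits = drive_read_gbytes * 8   # ~28 Gbit/s, ignoring protocol overhead

for link_gbits in (10, 40):
    print(f"{link_gbits} Gbit link: one drive fills {drive_read_gbits / link_gbits:.0%} of it")
```

A single drive overfills a 10Gbit link nearly threefold and takes up most of a 40Gbit one, which is the problem the paragraph above describes.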
Five of them on a PCIe x16 slot could make a nice RAID 5 solution, but that would require a very high-performance controller chip to be developed, with performance specs well beyond anything currently available. For every 1GB/s of write performance, the controller also has to read 1GB/s back from the array, XOR it, and then write the resulting parity information as well as the original data. While such a configuration could provide very high speeds, the enterprise market seems to be more focused on DIMM-based SSD solutions which link directly to the server's CPUs. These remove the controller chip problem and can distribute the storage across all the CPUs in a system if required.
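To illustrate the extra work, here is a minimal sketch of the XOR a RAID 5 controller performs, assuming a 4-data + 1-parity stripe; the block sizes and contents are illustrative only:

```python
# Minimal sketch of RAID 5 parity work, assuming a 4-data + 1-parity layout.
# Block sizes and contents are illustrative, not tied to any real controller.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte."""
    out = bytearray(blocks[0])
    for block in blocks[1:]:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Full-stripe write: parity is the XOR of all data blocks.
data = [bytes([d]) * 4096 for d in (1, 2, 3, 4)]   # four 4 KiB data blocks
parity = xor_blocks(*data)

# Partial (read-modify-write) update of one block: the controller must read
# the old data and old parity, XOR the old data out and the new data in,
# then write both the new data block and the new parity block.
old_block, new_block = data[2], bytes([9]) * 4096
new_parity = xor_blocks(parity, old_block, new_block)

# Sanity check: recomputing parity from scratch gives the same result.
assert new_parity == xor_blocks(data[0], data[1], new_block, data[3])
```

Every write therefore generates extra reads, an XOR pass, and a second write for parity, which is why the controller's throughput has to be a multiple of the host-visible write speed.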
The other great use for these types of drive would be as caches within arrays built from slower storage devices, but currently the write endurance is not designed for such tasks. The 960 PRO 2TB is only rated for 1.2PB of writes, which is not much in an enterprise write-cache deployment.
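A quick back-of-the-envelope check shows why; the 1 GB/s sustained write rate below is just an assumed cache workload, not a measured figure:

```python
# Back-of-the-envelope endurance check; the 1 GB/s sustained write rate
# is an assumed cache workload, not a measured figure.

endurance_tb = 1200          # 960 PRO 2TB rated endurance: 1.2 PB = 1200 TB
write_rate_gb_s = 1.0        # assumed sustained write rate into the cache

seconds = endurance_tb * 1000 / write_rate_gb_s
print(f"Rated endurance exhausted in ~{seconds / 86400:.0f} days")   # roughly two weeks
```

At that kind of duty cycle the rated endurance is gone in around two weeks, which is why consumer-grade endurance figures rule these drives out as enterprise write caches.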