Underestimating the impact of data management
Testing at QD=1 is like shooting on a range, and from the looks of it they are not even testing with up to 20 clients hitting the array at once. There is a big difference between shooting on a range and shooting in a real combat situation. What kills the predictability of application performance is data management and data locality.

As an example, when you take a backup in a DAS environment, you pretty much kill the application. It becomes even more pronounced as data volumes grow, and let's not even get into all the scans that many enterprises run day to day. (Let's also not open the debate about who needs backup when there are copies.)

As for test and dev: if I want a copy of the production data for testing, can I take a writeable snapshot, mount it on a test/dev cluster, and carry on, or do I have to copy the data out? The key to shared accelerated storage is minimizing data movement. Data movement happens all the time, and every data-movement job chokes application performance.
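To make the "range vs. combat" contrast concrete, here is a hedged sketch of two fio job files: a QD=1 single-client microbenchmark versus a run with 20 concurrent clients at deeper queues. The device path, block size, and runtime are assumptions for illustration, not a recommended test plan.

```
; range.fio -- the "shooting range": one client, queue depth 1
[qd1-range]
filename=/dev/nvme0n1   ; assumed device under test
rw=randread
bs=4k
iodepth=1
runtime=60
time_based

; combat.fio -- closer to production: 20 clients, deeper queues
[multi-client]
filename=/dev/nvme0n1
rw=randrw               ; mixed reads and writes, as real apps generate
bs=4k
iodepth=32
numjobs=20              ; the "up to 20 clients" from the discussion
runtime=60
time_based
```

The point is simply that latency and throughput numbers from the first job tell you little about how the array behaves under the second, especially once a backup or scan job joins the mix.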
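The writeable-snapshot workflow for test/dev can be sketched with ZFS-style commands; pool and dataset names here (`tank/prod`, `tank/dev`) are hypothetical, and this assumes a copy-on-write filesystem or array that supports clones:

```
# Point-in-time snapshot of production data: instant, no data copied
zfs snapshot tank/prod@for-testing

# Writeable clone of that snapshot: test/dev gets its own branch,
# sharing unmodified blocks with production instead of copying them out
zfs clone tank/prod@for-testing tank/dev

# Mount the clone where the test/dev cluster expects it
zfs set mountpoint=/mnt/dev tank/dev
```

The contrast with copying the data out is the whole argument: a clone moves essentially no data, so the test/dev copy appears in seconds without the application-choking bulk transfer.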