Too little too late
I've been testing DataCore, NetApp, and quite a few others for storage. The first and most important thing I learned is that the best way to cut storage costs is to stop using SCSI, which means there's a real need to get away from VMware (even as I struggle to get a VMware data center up right now).
SCSI IS NOT a network protocol, and it's really, really bad when forced to act as one. There are 9-12 full protocol encodes/decodes and translations between an I/O request leaving a virtual machine and the data landing on storage, which adds a ridiculous amount of latency. There's also an insane amount of overhead in block-based SCSI for handling small reads and writes, which are a fact of life since developers generally use their language's built-in streaming classes and functions for file I/O.
So, that brings us to NFS and SMB. NFS is OK-ish, but it's a protocol with far too much legacy and way too much gunk in it, since it tries to be everything to everyone. And all these years later, there's still no standard for handling operations like VAAI NAS as part of NFS, which is just plain silly: NFS is an RPC-based protocol, so remote procedure calls should be second nature to it. That makes NFS out of the question for daily virtualization on VMware, since VMware makes it impossible for anyone other than companies willing to spend $5,000 and sign contracts to get hold of the VAAI NAS API, which is just stupid. So, to get VAAI NAS working with Linux storage servers, I had to install the Oracle VAAI NAS driver, override its certificates, decode its REST API, and reimplement it on Node.JS to make it tolerable.
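For flavor, here's the shape of the thin wrapper I ended up writing on Node.js. The endpoint path, payload fields, and host are all hypothetical stand-ins; the real API was reverse-engineered and isn't public:

```javascript
// Hedged sketch of a VAAI-NAS-style "offloaded clone" call: ask the
// filer to clone a file server-side instead of shipping blocks over
// the wire. Endpoint and payload shape are hypothetical.
function buildCloneRequest(baseUrl, token, srcPath, dstPath) {
  return {
    url: `${baseUrl}/api/v1/files/clone`, // hypothetical endpoint
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ source: srcPath, destination: dstPath }),
    },
  };
}

// Usage (Node 18+ ships a global fetch):
// const { url, options } = buildCloneRequest('https://filer.example',
//   'TOKEN', '/vol0/base.vmdk', '/vol0/clone.vmdk');
// const res = await fetch(url, options);
```

The whole point of VAAI NAS is exactly this: one small RPC replacing gigabytes of block traffic, which is why the lack of a standard for it inside NFS is so frustrating.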
Then there's SMB v3, which is a near rewrite from the ground up aimed squarely at virtualization storage. To use it you need Hyper-V, which won't have nested-hypervisor support until the next release, and that's something I personally depend on heavily.
So, performance-wise... DataCore is SCSI, and their management system has all kinds of odd bugs and quirks and is damn near impossible to implement properly in an application-centric data center. There really isn't much value in their products beyond acting as an FC boot server for blades that don't like iSCSI (think HP, Cisco, etc.).
NetApp has terrible performance. Because of the sheer stupidity of block protocols, the NetApp has no idea what it's filing and dies a slow, painful death in hash-calculation hell. Heaven forbid two frequently accessed blocks have a hash collision: verifying them will eat up nearly the entire system. Let's talk controllers and ports. NetApp controllers and ports cost so much there's no point even talking about them. Then there's the half-baked PowerShell API, the barely functional REST API, and the disastrous support for System Center and vCloud Orchestrator. Add the ONTAP web GUI, which is so bad there's no point even trying to run it... and generally you can't anyway, because the installer can't set up a proper icon and the JVM blocks it regardless.
I have a nifty saying about NetApp and DataCore: if I wrote code that bad, I'd be unemployed.
These days there are a lot of options for storage... too bad most of them aren't that good. I'm moving almost entirely to Windows Storage Server with SoFS and Replica, because I can get a fairly stable 2-3 MIOPS (million IOPS) per storage cluster, and I've been building that on $500 used servers with an additional eight 10-gig ports, consumer SSDs, and NAS drives.