NetApp scale-out is not scale-out
their "cluster" is little more than a hack it seems like. I drilled what I think was a NetApp employee on some of the finer points of their cluster (given I have not used it, and there seems to be a lot of hype about the release) and was kind of surprised and dissapointed by the results --
http://datacenterdude.com/netapp/netapp-dataontap-81-reponse/
I don't know about Isilon's performance; perhaps they are 'slow' in IOPS, but the system really seems built for throughput rather than IOPS. I agree it doesn't seem like an ideal platform to run VMware directly on top of, and I assume (hope) EMC didn't buy Isilon for that market segment. It makes a lot more sense to use Isilon as a scale-out NAS where you put your data (accessed directly via guest OS-based NFS) versus your VM images, which you put on more traditional storage like a VNX or V-MAX or whatever. The amount of data for the images usually pales in comparison to the amount of data the applications themselves are using, by orders of magnitude.
The impression I get is that NetApp continues to bolt things onto their system, which just makes it more complicated, instead of really addressing the core issues of scalability and scale-out. They probably have too much invested in the current architecture to truly fix it (much like Cisco). It seems NetApp is still years away from having what most would consider a real cluster, if they ever get there.
Now, how they market the thing and get customers to buy into it is another matter. It wouldn't surprise me if they can sell a few more systems with this, but from a purely technical point of view, as a cluster, NetApp isn't there yet.
Look no further than the inability to stripe a volume across more than a single controller node, even in cluster mode. I mean, come on. Take the 24-node SpecSFS results NetApp released around the time 8.1 came out: you're basically having to MANUALLY manage 48 different storage systems (because a volume can live on only one controller). If you have a perfectly optimized workload like SpecSFS you can distribute your data over everything, but if you're a more traditional user I can imagine it keeping the administrator up at night (unless they are massively over-provisioned), because the system can't even automatically move a volume to another controller when I/O goes up. And even if it could (Compellent has this ability), there is quite a large overhead in moving TBs of data around between systems. A real cluster should be balanced from the get-go and able to move finer-grained units of storage around, e.g. sub-LUN auto-tiering between arrays.
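To make the imbalance concrete, here is a minimal sketch (Python, with invented volume counts and IOPS figures; it does not model actual ONTAP behavior) of what happens when each volume is pinned to one controller versus data being striped in fine-grained units across every node:

    NODES = 4

    # Per-volume I/O load in IOPS; volume 7 is a hotspot.
    # All numbers are invented purely for illustration.
    loads = [500, 500, 500, 500, 500, 500, 500, 12000]

    # Volume-pinned placement: each volume lives on exactly one
    # controller, so the hot volume saturates its home node.
    pinned = [0] * NODES
    for i, iops in enumerate(loads):
        pinned[i % NODES] += iops

    # Striped placement: data is spread in fine-grained units
    # across every node, so the load balances out.
    striped = [sum(loads) / NODES] * NODES

    print("pinned :", pinned)   # [1000, 1000, 1000, 12500]
    print("striped:", striped)  # [3875.0, 3875.0, 3875.0, 3875.0]

In the pinned model, the only fix for that overloaded node is physically moving an entire multi-TB volume to another controller, which is exactly the migration overhead described above.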
And as you might expect, the data management features you mention (of which I bet deduplication is one) don't apply across cluster nodes either. So are you now going to try to optimize deduplication by placing volumes with similar data on the same subsets of your cluster? Manually?
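As a toy illustration of why per-node dedup domains hurt (hypothetical block labels, not a model of NetApp's actual dedup implementation): the same block stored on three different controllers is kept three times under per-node dedup, but once under a global domain.

    # Hypothetical 4 KB block fingerprints per controller; block "A"
    # is duplicated on every node. Labels are invented for the example.
    node_blocks = {
        "node1": ["A", "A", "B", "C"],
        "node2": ["A", "D", "D", "E"],
        "node3": ["A", "B", "F", "F"],
    }

    total = sum(len(blocks) for blocks in node_blocks.values())

    # Per-node dedup: duplicates only collapse within a single node.
    per_node = sum(len(set(blocks)) for blocks in node_blocks.values())

    # Global dedup: duplicates collapse across the whole cluster.
    global_dedup = len({b for blocks in node_blocks.values() for b in blocks})

    print("raw blocks stored:", total)          # 12
    print("after per-node dedup:", per_node)    # 9
    print("after global dedup:", global_dedup)  # 6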
For all of the hype this release seems to have, if I were a customer I would feel let down. Is this all they get after so many years of trying to integrate that Spinnaker stuff?
Also, if XtremIO is built from the ground up to be flash-based, I don't see why it would compete with a hybrid NetApp PAM/HDD system. Unless EMC wants to integrate XtremIO into their existing lineup rather than keeping it as a standalone product (would that take them many years, like integrating Spinnaker took NetApp?). Perhaps the acquisition was to fend off the likes of Violin (maybe XtremIO came at a much better price)? I don't know; I'm not too familiar with that market space.
I just really don't see, from a technical standpoint, how XtremIO versus NetApp stacks up; they appear to be two completely different approaches to solving different problems. Though that won't stop salespeople from using even more force to fit square pegs into round holes.