Respectfully, I disagree.
Disclosure: EMCer here.
Chris - you probably would expect this from me, but I disagree. Let me make my argument, and let's see what people think. I ask for some patience from the reader, and an open mind. I'm verbose, and like to explore ideas completely - so this won't be short, but just because something isn't brief doesn't make it less accurate.
Read on and consider!
The choice of "multiple architectures to reflect workload diversity" vs. "try to serve as many workloads as you can with one core architecture" is playing out in the market. Ultimately, while we all have views, the customers and the marketplace decide what the right trade-off is.
a) EMC is clearly in one camp.
We have a platform which is designed to "serve many workloads well - but none with the pure awesomeness of a platform designed for a specific purpose". That's the VNX. VNX and NetApp compete in this space furiously.
BUT we came to the conclusion a long time ago that if you tried to make VNX fit the space that VMAX serves (maniacal focus on reliability, performance, and availability DURING failure events), you would end up with a bad VMAX. Likewise, if we tried to have VNX fit the space Isilon fits (petabyte-level scale-out NAS, which is growing like wildfire in genomics, media, web 2.0 and more), you would end up with a bad Isilon. Why? Because AT THE CORE, you would still have a clustered head. Because AT THE CORE, file/data objects would be behind ONE head, on ONE volume. Because, AT THE CORE, you would still have RAID constructs. Are those intrinsically bad? Nope - but when a customer wants scale-out NAS, those constructs get in the way, and that's why Isilon wins almost overwhelmingly over NetApp cluster mode - when THOSE ARE THE REQUIREMENTS.
b) NetApp (a respected competitor, with a strong architecture, happy customers and partners) seems to me to be in the other camp. They are trying to stretch their single product architecture as far as it can go.
They finally seem to be "over the hump" of the core Spinnaker integration with ONTAP 8.2. Their approach of federating a namespace over a series of clustered FAS platforms has some arguments in its favor, to be sure. The code path means that they can serve a transactional IO in a clustered model at lower latency than Isilon (but not as low as in simple scale-up or VNX, and certainly not the next-generation VNX). They can have multiple "heads" for a "scale-out" block proposition to try to compete with HDS and VMAX. In my experience (again, MY EXPERIENCE, surely biased) - the gotchas are profound. Consider:
- With a scale-out NAS workload: under the federation layer (vServers, "Infinite Volumes"), there are still aggregates, FlexVols, and a clustered architecture. This means that when a customer wants scale-out NAS, those constructs manifest - a file is ultimately behind one head. Performance is non-linear if the IO follows the indirect path (a rough sketch after this list illustrates the effect). Balancing capacity and performance means moving data and vServers around. Yup, NetApp in cluster mode will have lower latency than Isilon, but for that workload, latency is not the primary design center - simplicity and the core scaling model are.
- Look at the high-end Reliability/Serviceability/Availability workload: in the end, for better or worse, NetApp cluster mode is not a symmetric model with a shared memory space across all nodes (the way all the platforms that compete in that space have been architected). That is at the core of why 3PAR, HDS, and VMAX all have "linear performance during a broad set of failure behaviors". Yup, NetApp can have a device appear across different pairs of brains (i.e. across a cluster), but it's non-linear from port to port, and failure behavior is also non-linear. Is that OK? Perhaps, but that's a core design center for those use cases.
- And when it comes to the largest swath of the market - the "thing that does lots of things really well" - I would argue that the rate of innovation in VNX has been faster over the last 3 years (due to focus, and not getting distracted by trying to be things it is not, and was never fundamentally designed to do). We have extended the places where we were ahead (FAST VP, FAST Cache, SMB 3.0, active/active behaviors, overall system envelope), we have filled in the places where we were behind (snapshot behaviors, thin device performance, block-level dedupe, NAS failover, virtualized NAS servers - VDM in EMC-speak, MultiStore/vServers in NetApp-speak), and we are accelerating where there are still places to run (the extreme low-end VNXe vs. FAS 2000, larger filesystem support).
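To illustrate what "non-linear if the IO follows the indirect path" means, here is a deliberately simplified toy model in Python. The latency numbers, forwarding behavior, and node counts are assumptions for illustration only - this is not a model of ONTAP cluster mode or Isilon internals, just the general effect of a file living behind one head inside a federated namespace.

```python
# Toy model (illustrative assumptions only, not any vendor's implementation):
# in a federated/clustered NAS where each file lives behind a single head,
# a request that enters the cluster at the owning node takes the direct path,
# while a request that enters anywhere else pays an extra hop over the
# cluster interconnect - so latency depends on where the IO lands.
import random

LOCAL_SERVICE_MS = 1.0      # assumed time for the owning head to serve the IO
INTERCONNECT_HOP_MS = 0.5   # assumed cost of forwarding across the cluster

def read_latency_ms(owning_node: int, entry_node: int) -> float:
    """Latency for a read that enters at entry_node for a file owned by owning_node."""
    if entry_node == owning_node:
        return LOCAL_SERVICE_MS                      # direct path
    return LOCAL_SERVICE_MS + INTERCONNECT_HOP_MS    # indirect path: extra hop

def average_latency_ms(nodes: int, samples: int = 100_000) -> float:
    """Average read latency when files and entry points are spread evenly."""
    total = 0.0
    for _ in range(samples):
        owner = random.randrange(nodes)
        entry = random.randrange(nodes)
        total += read_latency_ms(owner, entry)
    return total / samples

if __name__ == "__main__":
    # As the cluster grows, more IO lands on a non-owning node, so the average
    # latency drifts up toward the indirect-path cost - the "non-linear" effect.
    for n in (2, 4, 8, 24):
        print(f"{n:>2} nodes: average read latency ~ {average_latency_ms(n):.2f} ms")
```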
Look - whether or not you agree with me - it DOES come down to the market and customers. IDC is generally regarded as a trusted cross-vendor view of the market, and the Q2 2013 results are in, and public, here: http://www.idc.com/getdoc.jsp?containerId=prUS24302513
Can a single architecture serve a broad set of use cases? Sure. That's the NetApp and EMC VNX sweet spot. NetApp has chosen to try to expand it differently than EMC. EMC's view is that you can only stretch a core architecture so far before you get into strange, strange places.
This is fundamentally reflected in NetApp's business strategy over the last few years. They themselves recognize that a single architecture cannot serve all use cases. Like EMC, they are trying to branch out organically and inorganically. That's why EMC and NetApp fought so furiously for Data Domain (the B2D and cold storage use case does best with that architecture). I suspect that's why NetApp acquired Engenio (to expand into high-bandwidth use cases - like sitting behind HDFS, or the video-editing workloads that DDN, VNX, and others compete in). The acquisition of Bycast to push into the exa-scale object store space (which biases towards simple COTS hardware with no built-in resiliency) is another example.
On the organic front, while I have ZERO insight into NetApp's R&D, I would suggest that their entry into the all-flash array space (FlashRay?) would really be best served by the "clean sheet of paper" approach of the startups (EMC XtremIO, Pure Storage, etc.) rather than by trying to jam it into the "single architecture" mold. If they choose to stick with a single architecture for this new "built for purpose" space - well - we'll see - but I would expect a pretty mediocre solution relative to the competition.
Closing my argument....
It is accurate to say that EMC needs ViPR more than NetApp does. Our portfolio is already broader. Our revenue base, and more importantly our customer base, is broader.
NetApp and NetApp customers can also benefit now - and we appreciate their support during ViPR development on the southbound integration into the ONTAP APIs (and I think their customers will appreciate it too). NetApp is already more than a single-stack company. Should they continue to grow and expand into other use cases, they will also need to continue broadening their IP stacks.
Lastly - ViPR is less about EMC or NetApp, and more a recognition that customers need abstraction and decoupling of the storage control plane and policy REGARDLESS of whose storage they choose - and that many customers whose needs are greater than the "mixed workload" sweet spot (VNX and NetApp) have diverse workloads, and diverse architectures supporting them (often multi-vendor).
This is why ViPR is adjacent to, not competitive with, SVC (array in front of array), NetApp V-Series (array in front of array), HDS (array in front of array), and EMC VPLEX and VMAX FTS (array in front of array). These are all valid - but very different - forms of traditional storage virtualization, where they: a) turn the disks of the old array into just raw storage (which you format before using); b) re-present that storage out for use. All of these end up replacing (for worse or for better) the characteristics of the architecture in the back with the characteristics of the architecture in the front. ViPR DOES NOT DO THAT.
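To make that distinction concrete, here is a deliberately simplified Python sketch. The class names, methods, and behaviors are hypothetical illustrations of the two models - not ViPR's, SVC's, or any vendor's actual API - just a way to show "in the data path, re-presenting capacity" versus "control plane and policy only, with IO staying native."

```python
# Hypothetical sketch of the two models described above - illustrative names
# and behaviors only, not any vendor's real API.

class DataPathVirtualizer:
    """'Array in front of array': backend LUNs are ingested as raw capacity,
    re-formatted, and re-presented - so every IO flows through the front-end
    device, and its characteristics replace those of the backend array."""

    def __init__(self):
        self.raw_capacity_gb = 0

    def ingest_backend_lun(self, size_gb: int) -> None:
        self.raw_capacity_gb += size_gb           # backend identity is absorbed

    def present_volume(self, size_gb: int) -> dict:
        assert size_gb <= self.raw_capacity_gb, "not enough ingested capacity"
        self.raw_capacity_gb -= size_gb
        return {"size_gb": size_gb, "data_path": "through the virtualizer"}


class ControlPlaneAbstraction:
    """Control-plane/policy layer: it catalogs arrays and drives provisioning
    through each array's own native interface, but IO still flows directly to
    the array that owns the volume - the abstraction is not in the data path."""

    def __init__(self):
        self.arrays = {}                          # name -> native provision function

    def register_array(self, name: str, provision_fn) -> None:
        self.arrays[name] = provision_fn

    def provision(self, array_name: str, size_gb: int) -> dict:
        volume = self.arrays[array_name](size_gb) # delegate to the native array
        volume["data_path"] = f"native to {array_name}"
        return volume


if __name__ == "__main__":
    abstraction = ControlPlaneAbstraction()
    abstraction.register_array("array-a", lambda gb: {"size_gb": gb})
    abstraction.register_array("array-b", lambda gb: {"size_gb": gb})
    print(abstraction.provision("array-b", 500))  # volume is served natively by array-b
```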
Remember - ultimately the market decides. I could be completely wrong, but hey - innovation and competition are good for all!
THANKS for investing the time to read and consider my argument!