Oh Rob....
Hi Rob! Thanks for your reply. It's great that the VP of Product Marketing has time to find this post and reply to it on a public forum...(!)
You're right! Customer value is more important. It's why Nimble has been so successful over its last 3 years of selling in the marketplace, and why it placed in the "Visionary" category of this year's Gartner Magic Quadrant for Storage. I don't recall seeing Tegile there (or anywhere in the Quadrant). 1500 customers and 3000 deployments in under 3 years of selling is a huge achievement, so we must be doing something right.
Just want to pull you up on a few of your marketing claims if you don't mind...
1. Dedupe vs Compression
We all know the operational overhead that deduplicating data carries versus inline compression. Heck, a lot of our founding engineers (ex-NetApp and ex-Data Domain, no less) know it first-hand. That's partly why we chose NOT to do it: it frees the system to run more important tasks like garbage collection, back-end performance work and cache-hit optimisation, which means we can fill an array up to 95% of capacity before performance takes any hit. ZFS, as you know, has huge problems in this area: performance tanks once you go past 60% capacity on the box (and starts degrading at 30%). The age-old problem of a hole-filling file system, eh?
However, the figures you quote seem wrong. Our customers (all 1500+ of them) see compression of 40-60%+ on their production environments, not 20-30%. And it seems your own customer would agree: this one in particular sees far better compression ratios than dedupe ratios on their system. PS - NICE GUI(!): http://www.iphouse.com/blog/mike/wp-content/uploads/2012/01/20120110-zebi-1-volume.png
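To make the overhead point concrete, here's a minimal Python sketch (my own illustration, not Nimble's or Tegile's actual code): inline compression is stateless per block, while dedupe has to consult and maintain a global fingerprint index on every single write - and that index is the operational baggage.

```python
import hashlib
import zlib

def inline_compress(block: bytes) -> bytes:
    """Stateless: each block is compressed independently.
    No global index to maintain, so the overhead stays constant."""
    return zlib.compress(block)

class DedupeStore:
    """Toy dedupe store: a fingerprint index must be kept for every
    unique block ever written -- that index is the extra overhead."""
    def __init__(self):
        self.index = {}  # fingerprint -> stored block

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.index:  # index lookup on EVERY write
            self.index[fp] = block
        return fp

# Space savings quoted the way array vendors usually quote them:
data = b"ABCD" * 4096  # repetitive sample data, so it compresses well
saved = 1 - len(inline_compress(data)) / len(data)
print(f"space saved: {saved:.0%}")
```

The toy numbers are made up, of course - the point is only that the compression path touches nothing but the block in hand, while the dedupe path drags a growing index along for the ride.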
2. Unified Access
Sure, having every protocol under the sun is great. But if your delivery of said protocols sucks, you end up underachieving at everything and excelling at nothing. That's something I hear a lot out in the field, where I am every day. We, on the other hand, chose one protocol we could optimise, built a solid foundation on it, and set out to be the best in the field with it. Which we are.
3. Active/Active Controllers
C'mon Rob... really? Any storage engineer/vendor worth their salt knows that active/active controllers are far more complex to manage, with protocol and volume distribution across the system (ever heard of LUN trespass?). It also means customers only ever run their controllers at 50% load, to ensure that if a controller fails the system doesn't blow up when everything lands on the survivor. And it means storage firmware updates are FAR more complex, require lots of downtime, and may even need engineers onsite...
Whilst we run active/hot-standby controllers in our system (yes, data is mirrored from controller to controller in real time via NVRAM), a Nimble firmware update takes 5-6 minutes in total and causes the loss of 4 PACKETS across the whole update. 4 PACKETS! That's insane. Nimble can also upgrade an array from 20K to 75K IOPS by live-upgrading controllers on the fly, without adding a single further disk or SSD to the system. Can Tegile do that? Didn't think so.
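And here's the back-of-the-envelope math on that 50% rule, with made-up round numbers purely for illustration: once you reserve the headroom a failover demands, active/active delivers no more safe throughput than a single hot controller does.

```python
# Hypothetical figures for illustration only -- not measured numbers
# from either vendor's array.
controller_capacity = 50_000  # IOPS one controller can sustain

# Active/active: both controllers serve I/O, but each must stay at or
# under 50% load so the survivor can absorb the full workload when its
# partner dies.
aa_safe_load_per_controller = controller_capacity * 0.5
aa_usable_iops = 2 * aa_safe_load_per_controller  # one controller's worth

# Active/hot-standby: one controller serves I/O at full tilt; the
# standby (with mirrored NVRAM) takes over at identical capacity.
standby_usable_iops = controller_capacity

print(aa_usable_iops, standby_usable_iops)  # the "extra" headroom is zero
```

Same safe ceiling either way - except one design carries the LUN-trespass and split-brain complexity and the other doesn't.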
4. ZFS
ZFS has HUGE problems, and plenty of people and customers in the industry know it. You're trying to say that a legacy code base with a hole-filling filesystem, maintained by an open-source community and patched with a few contractor band-aids, is better than engineering and architecting a filesystem from scratch with full-time in-house engineers and support?! And you're trying to sell these arrays to enterprise accounts?!
By the way, Nimble has been writing CASL (its patented and proven file system) for over 5 years now, with amazing success... so I'd say we've "got it right". Our large customer base (and world-class 24/7 support team) would agree.