
Selling the Dream
"and add the value of data reduction at typical rates"
This, above all else, is why I hate Pure Storage.
Our Pure Storage range expansion story about the firm's FlashArray 400 series arrays prompted a missive from Pure saying that we'd got the price wrong. Which is strange given that we were quoting Pure's own figures. According to Pure Storage's marketing bods, low-end model pricing for the FA-405 and FA-450 arrays was $6 to $8/ …
Dedupe makes $/GB hard to calculate, as 'typical' is specific to a client. Pure use 6:1 (see their data sheet) after the 40% format/protection overhead, so 11TB raw gives up to roughly 40TB effective. Databases (which generally love flash) tend to dedupe less well than, say, 100 test VMs. If the data is encrypted or compressed at source you may see very little extra value, and performance slows as every IO becomes a flash hit rather than a cache hit in that scenario. So if you have been getting 6:1 and have 1TB of usable flash left (after protection and reserve), you would expect to be able to thin provision 6TB of effective storage, but only if you keep the same workload mix; bring in a whole new workload and mileage will vary. You also need to know what $3-$4 gives you in terms of service and support: it's the total cost of ownership that matters for a given SLA, along with a quantified business impact of moving to flash. It all depends whether you have 10TB of database or 10TB of general-purpose VMs. There is no one-size-fits-all in storage.
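To make the arithmetic above concrete, here is a minimal sketch in Python; the 40% overhead and 6:1 ratio are the figures quoted in this thread, not vendor guarantees, and the function name is just illustrative:

```python
# Rough capacity sketch; the overhead and reduction ratios are thread figures, not guarantees.

def capacities(raw_tb, format_overhead=0.40, data_reduction=6.0):
    """Return (usable_tb, effective_tb) after format/protection overhead and data reduction."""
    usable = raw_tb * (1 - format_overhead)
    return usable, usable * data_reduction

usable, effective = capacities(11)      # the 11TB raw example above
print(f"{usable:.1f} TB usable, ~{effective:.0f} TB effective at 6:1")
# prints: 6.6 TB usable, ~40 TB effective (only if the workload really reduces 6:1)

remaining_usable = 1.0                  # 1TB of usable flash left, as in the example
print(f"~{remaining_usable * 6.0:.0f} TB could be thin provisioned at a sustained 6:1")
```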
So 4TB * 60% * 6:1 = 14.4TB (assuming the default Pure dedupe ratio). That's about $8/GB usable including support, so around the upper end of what they quote for smaller configs... but are you getting 6:1? ;) What workload have you put on to get that dedupe ratio? It would be interesting to get the context around that specific figure.
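Working that sum through to a $/GB number (the list price below is purely hypothetical, picked only to show how a figure around $8/GB falls out; the 60% and 6:1 values are the assumptions from the post):

```python
# Illustrative only: back out $/GB of effective capacity from a raw configuration.
raw_tb         = 4        # raw flash in the quoted config
format_eff     = 0.60     # what is left after ~40% format/protection overhead
data_reduction = 6.0      # the "default" Pure dedupe ratio assumed above

effective_tb = raw_tb * format_eff * data_reduction        # 4 * 0.6 * 6 = 14.4 TB

list_price = 115_000      # HYPOTHETICAL price including support, for illustration only
print(f"{effective_tb:.1f} TB effective")
print(f"${list_price / (effective_tb * 1000):.2f} per GB effective")   # roughly $8/GB
```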
I'm considering VDI and VMware on an all-flash array, and looking at Pure closely. What happens when the workload changes? For example, I start with pooled desktops and then change to persistent images. Persistent users typically store more data in the conventional sense, as if it were their own PC. How do I mitigate that hitting the all-flash array, given that user-created data (ISOs, zip files, PDFs, etc.) doesn't benefit greatly from dedupe and gets only slight compression at best? Will this increase the overall $/GB as well, in your opinion? I'm trying to get AFA performance, but I'm having second thoughts. Lastly, what are the overall incremental costs? That's what usually breaks the bank when steep discounts are applied to the acquisition cost. Full disclosure: I'm looking at XtremIO and Pure.
Best price/GB right now seems to be the Dell Compellent hybrid, which uses two types of flash combined with 4TB tier drives. You get real performance (not just cached reads like some of the hybrids) without the cost. When I recently did the numbers, it blew past what HP, EMC, Nimble, Pure and Tegile offered. Good install base, good roadmap of functionality, backed by a global corporation. Just my 2 cents...
I'd seriously dispute those $/GB figures against the new breed of systems; would you be able to present them, please, AC? Sounds like you may be a Dell rep in disguise.
To the OP: if you choose the right solution, you can get AFA performance out of the newer breed of storage vendors. For example, at Nimble we can deliver 60K IOPS (reads AND writes) out of 12x NL-SAS drives and 4x MLC SSDs, for a lot less money. We also have stacks of references from people who have done exactly that, and who run SQL, Exchange, server virtualisation and file on the same drives without any problem... with no troublesome tiering involved.
Disclosure - happy Nimble employee.
Following up on my original question, what's the whole IOPS story? If I have a plan to execute on 1,000-1,500 desktops, do I need 200K IOPS? My calculations say I need in the realm of 10-25 IOPS per desktop, max. I'm reading more and more that the 200K AFA figures are mostly reads, not writes, and I thought VDI was mostly writes, at least north of 80%. Any VDI experts out there to comment?
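For what it's worth, here is a quick back-of-the-envelope version of that sizing; the desktop count, per-desktop IOPS and read/write split are the estimates from the post, not measured figures:

```python
# VDI steady-state sizing sketch using the estimates quoted above.
desktops         = 1500
iops_per_desktop = 25     # upper end of the 10-25 IOPS per desktop estimate
write_fraction   = 0.80   # "north of 80%" writes in steady state

total_iops = desktops * iops_per_desktop
write_iops = total_iops * write_fraction
read_iops  = total_iops - write_iops

print(f"{total_iops:,} total IOPS ({write_iops:,.0f} writes, {read_iops:,.0f} reads)")
# prints: 37,500 total IOPS (30,000 writes, 7,500 reads)
# Nowhere near 200K, and mostly writes, so the array's write IOPS figure (and its
# behaviour during boot/logon storms, which flip to reads) is what matters here.
```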
FULL DISCLOSURE: I work for EMC XtremIO. Please visit "http://xtremio.com/white-papers-data-sheets", where we have posted a few large-scale VDI reference architectures with XtremIO. As to your question about needing 200K IOPS and VDI being write-heavy: yes, VDI can indeed be write-heavy, especially if you are using Citrix PVS, which does lots of read caching in the PVS servers. But at other times it is read-heavy, for example during boot or logon storms. This is all covered in the reference architectures.
About IOPS: it really comes down to what you're trying to deliver. At XtremIO we fully believe that for VDI projects to be successful you need to deliver a "better than desktop" user experience. This means a virtual desktop that performs better than a modern SSD-equipped ultrabook like a MacBook Air. Users don't like getting a virtual desktop that mimics the behavior of a cheap desktop with a SATA drive in it. The great thing is that inline deduplication technology on XtremIO makes it not only feasible to deliver this level of performance, but to do it at a $/desktop that is very attractive, whether you plan to deploy linked clones or full-clone persistent desktops.
These capabilities were not possible a few years ago. Do your flash array homework wisely, as not all flash arrays can deliver the requisite performance or economics, particularly for full-clone desktops.