Wouldn't there be a fairly big contribution from Solidfire in the NetApp numbers?
I think XIV hit some architectural limits but was an interesting bit of kit for a few years. It's hard to accept the CFO's statement that they are well positioned to take advantage of flash, when flash has been around for a couple of years now and they are still struggling. Take out the storage that IBM attached to z and p series, almost by default, and there is very little else going on!
Re: Meaningless Fiction by Joe Unsworth
The SKU purely for flash only exists due to Gartner's definition! And a lot of HPE's all flash revenue is still not being counted by Gartner, as many customers do want the flexibility to add some spinning drives. But even those customers are moving all flash now so expect the HPE numbers to keep ramping up significantly as they start to reflect the real amount of flash HPE is shifting!
What's your beef with ASICs anyway? Why would a customer even care?
That's really not a lot of comfort for customers. It just enables Pure to make some more aggressive claims knowing that they'll get the sale and can fix any mess afterwards. But for a customer that could mean going from 1 array to 2, hassle, inconvenience, embarrassment, etc. Much better to be more conservative / realistic with claims and anything better that is achieved is all goodness for the customer. The problem with Pure is that some of their arrays are quite small in terms of capacity so they need the big claimed saving to be competitive.
When talking about capacity, I wish all vendors would just talk good old-fashioned usable capacity. As someone has said, your mileage will vary with dedupe and compression, so capacity only really becomes 'effective' once you've done a PoC or proper analysis. Until then it's just 'marketing' capacity, which is all risk for the customer.
EMC were making a big deal last week about 3.84TB drives 'coming soon' ... HPE has had them for some time and has double capacity drives imminently. Given that the only remaining barrier to flash being used for just about all general purpose workloads is cost, then this is going to be of interest to many customers.
Re: Weak argument
EMC are still the overall primary storage market leader but the gap is shrinking rapidly (as recent market share reports have shown!). 'Inevitable shift to all flash' ... welcome to the future EMC! Only a year or two late! Right tool for the job? EMC / Dell have about 4 different tools for any job ... VMAX, VNX, XtremIO, and Compellent all potentially overlapping significantly. Some vendors can properly cover all 4 of those with a single product. A confused portfolio just creates decisions for customers that they shouldn't have to worry about!
This seems like a fairly pointless study to me. But assuming there is any relevance, isn't it a bigger issue for a vendor to be dropping significantly in percentage terms, rather than dropping their position in a list? Look at HPE for example: it barely drops in percentage terms yet is highlighted because it slips down the list? EMC are still leading but have a much larger proportional drop. To me that's a bigger issue. Although I will add that I think this whole study is a bit of a nonsense.
I hate it when vendors talk about 'effective capacity' as if it's real - "With the A270 offering up to 384TB effective capacity in 2U, HDS claims it is the industry’s densest all-flash array." The only real thing is usable capacity, everything else your mileage may vary! And base your sums on 5:1 and you're really rolling the dice!
That's a lot of different plays. The great thing about NetApp used to be their single minded dedication to ONTAP. It was a powerful message. Now they have 4 offerings to cover a market that others are covering with a single product. And why would anyone really need a hybrid play any more when AFAs are more cost effective. Hybrids are ceasing to make sense at a rate of knots.
Re: Entertaining discussion
Ah, so you're still in the outdated mindset that flash is just for stuff that needs super high performance, so it's basically a drag car and you don't need bells and whistles. Wake up, flash is great for just about every workload these days, and the market for general purpose arrays full of flash and benefiting from the performance is way bigger than the market for super fast storage without any real functionality! XtremIO is lagging behind - disruptive firmware, 2M IOPS if you take 8 bricks (which isn't even that special when 'disk arrays' full of flash can do 3.2M), poor scalability, has it even got replication yet (if it has, it's not mature)? Those 747s with propellers can go just as fast as the jet ones, and carry a load more passengers these days. Stop throwing outdated FUD around and open your eyes.
Is it really news that EMC's portfolio is currently pretty weak? You shouldn't need (most likely flawed) surveys to tell you that. VNX is so dated and limited now, XtremIO is a real slow learner and still way behind the other flash arrays. There isn't a lot to defend about EMC arrays right now. If EMC weren't so good at locking out other vendors, they would really be suffering right now.
"With a 6:1 data reduction ratio, the effective capacity of a fully packed cluster is 1.9PB." ... why not claim even higher. It's like going to the bookies and saying I'm going to bet on the higher odds horse so I can win more. There's a reason the odds are higher! Anyone using 6:1 to arrive at their effective capacity is heading for a fall!
From what I have heard about the way Autonomy recognized revenue, and forecasted, there were certainly some 'unusual' practices. And when I say heard, I mean been told first hand by current employees. I'm not sure what is legal and what isn't, or what HP should or shouldn't have been able to uncover prior to the acquisition but their practices were different to anything I'd seen before in a career of nearly a couple of decades.
Look, we're at the point where it's not a case of thinking 'why would I buy an AFA' but 'why wouldn't I'? If an AFA is cheaper per effective TB (based on a sensibly low, realistic dedupe ratio, not some higher aspirational number) then why wouldn't you? Even if your apps don't need typical flash performance, if flash is cheaper then you'd buy it simply because it means you can put more on the array and not worry about it. And even apps that don't need performance as such aren't going to turn their nose up at it if it's available for less than the price of spinning disk. Forget flash being a luxury that few can afford, flash is cheaper than spinning disk for an awful lot of environments! It certainly is for the vendor I work for, who has the best AFA on the market: http://searchsolidstatestorage.techtarget.com/feature/HP-3PAR-StoreServ-7450
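'Cheaper per effective TB' is itself a trivial calculation, and the point about using a conservative ratio can be shown in a few lines. A sketch with entirely made-up prices, purely to illustrate the comparison:

```python
# Compare cost per effective TB for flash vs spinning disk, giving the
# flash side only a deliberately conservative dedupe ratio.
# All prices here are hypothetical, for illustration only.

def cost_per_effective_tb(price_per_usable_tb, reduction_ratio=1.0):
    """Dollars per TB of data actually stored, after data reduction."""
    return price_per_usable_tb / reduction_ratio

flash = cost_per_effective_tb(3000, reduction_ratio=2.0)  # sized at a low 2:1
disk = cost_per_effective_tb(1800)                        # no dedupe assumed

print(f"flash: ${flash:.0f}/effective TB, disk: ${disk:.0f}/effective TB")
```

With these example prices, flash wins at even 2:1; the point is the comparison only holds if the ratio you plug in is one you'll actually achieve.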
We're getting beyond the point where customers are buying flash for pure performance these days, so benchmarks and performance POCs are largely pointless. Flash is fast, we get it! But functionality whilst being bloody fast is the battleground now. It's very rarely a pure drag race these days, and for the small number of customers where absolute raw ridiculous performance is key, they already know where to look. There aren't many workloads that a few hundred thousand IOPS at sub-ms latency won't swallow up, but how easy is it to live with the array?
Re: Only EMC's View
What utter nonsense. Nobody buys an all flash array because it is an AFA, they buy it because it is bloody fast. HP 3PAR is bloody fast (and can be described as an AFA if you wanted to be pedantic). And the advantage is it has all the functionality that EMC seem to think no one can deliver!!
Any customer buying based on their claimed 6:1 is taking a massive gamble. If that's the average, across their entire fleet, then there is much more scope for high outliers, than low ones. And their arrays aren't very big in raw terms so buy an array needing 6:1 to get the numbers to work, only get a still fairly respectable 4:1 and then what? You don't have enough capacity and you're into a new box and going back cap in hand for more budget! Much better to buy based on a conservative dedupe ratio, and if you exceed it then it's free capacity!
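The gamble is easy to quantify. A rough sketch with hypothetical numbers (an array with 100TB usable, sized against a 600TB dataset on the strength of the claimed 6:1):

```python
# If you size an array on the claimed dedupe ratio and only achieve a
# lower one, the shortfall is the capacity you go back cap in hand for.
# All figures are hypothetical, purely to illustrate the risk.

usable_tb = 100    # raw usable capacity of the array
dataset_tb = 600   # logical data you planned to store (sized at 6:1)

def shortfall(achieved_ratio):
    stored = usable_tb * achieved_ratio   # logical TB the array can hold
    return max(0, dataset_tb - stored)    # logical TB left without a home

print(shortfall(6.0))  # nothing short if the claim holds
print(shortfall(4.0))  # the gap at a still-respectable 4:1
```

At 4:1 you're a third of your dataset short, which in practice means a second array. Size on a conservative ratio and the same miss costs you nothing.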
The IBMer getting very defensive there! 23% margin in the sectors that IBM remains in is nothing to write home about! Wasn't their justification for selling off all the bits that have gone that they wanted to focus on higher margin areas? IBM have a number (a declining number) of customers welded to them, but I don't see them winning any new business anywhere! This downward spiral doesn't look like finishing any time soon.
No story here. This happens every year! And having moved from IBM, I love it as it used to be a real pain in the backside there ... but then again, 31st December is their year end. As for being forced to take time off, you are given 25 days of which HP can mandate up to 4! Hardly Draconian! And carry over is at BU discretion too.
Re: ONTAP Performance
The F400 result was for a 4-year-old, entry-level (and now dead) array running only spinning disk, and the NetApp result from earlier this year couldn't even beat that despite having some flash! A disappointing outcome, however you try and frame it. I am genuinely interested to see how that claimed performance stands up when you actually start trying to do something useful with the array. People are not going to buy a NetApp array based on pure grunt, so if the numbers disappear over a cliff when you turn on various functions then they are irrelevant. Also interested why the 1.3ms response time rather than the assumed 1ms typically used for flash numbers.
This is a joke. NetApp has missed the boat so badly and to then come to market with such an under spec'd machine is laughable. What idiot is going to buy one of these first models? There are about 32 startups and about 5 big vendors with considerably more advanced offerings! It's surprising that NetApp have allowed themselves to get in this mess really. The end is nigh!
Re: look at their products and more specifically their customer support
Hodge Podge Portfolio? HP pretty much only sell 3PAR. There are a couple of other products around the edges but HP probably has the least 'hodge podge' portfolio of anyone. Even NetApp that used to have the cleanest portfolio has acquired and is developing net new products.
The bigger these claims of capacity savings become, the more the scope for a significant mis-sizing! As we all know, 'your mileage will vary' so the bigger that claim, the bigger the margin for error! Given the skepticism that even still exists in some corners around technologies like thin provisioning, I can imagine many IT shops having significant concerns over sizing, especially when close to the upper limit for these relatively small AFAs.
Why do we need to have a distinction for SSA vs arrays that can have HDDs installed in them. People don't buy a SSA because it is a SSA, they are looking for an array with certain performance characteristics and in the current market that is most likely to be met using SSDs but not necessarily just SSDs. Gartner talk about the ability to put HDD in the same array as SSD as if it is a bad thing. Surely flexibility can only be a good thing? If you can deliver the required performance without having to go for a completely separate silo of storage then why wouldn't you?
Re: Is there a comparison between the different flash vendor offerings?
Joking aside, with dedupe 3PAR now looks the real deal. One of the most scalable flash offerings, very very quick (certainly on a par with all but the most rapid of AFAs), fully functional, six nines availability, and flexible in that it can have hybrid options as well. Very few companies will have just flash arrays (for some time), so if you are going to have a mixture of flash, hybrid and disk arrays, then 3PAR avoids the need for separate silos.
Re: Why AFAs are small
If you're spending money on an AFA then you do so for a reason and you want to thrash it ... so it would concern me that doing just that could cause problems!
And in terms of why they are so small, your second point is irrelevant as your first point says that they can't currently scale due to technical limitations.
And as for your second comment, the AFA vendors are saying that flash is cheaper than disk, so why should that only apply to smaller requirements? If it's cheaper, it's cheaper ... surely economies of scale only enhance that for bigger arrays? What you're also saying is that hybrid arrays are the way forward ... flash where it is needed, spinny stuff when it is needed.
So in defending Pure what you've actually said is that its architecture won't scale, and flash is actually more expensive than disk despite their claims.
And whilst I'm at it ... what is an 'AFA customer'? There is no such thing. Find me any customer that doesn't have a variety of performance requirements across their environment. There is no such thing as an AFA customer. There are customers who have a performance requirement for some of their data sets but those same customers will have cold data sets so are they a 'spinny disk customer' too?
Pure switch off inline dedupe when the box gets busy. And why are all these AFAs so small? I know they claim usable capacity as being after dedupe, but even then they top out at about 100TB. Why not just make a bigger one? Makes you think that there is some sort of scalability issue they are gradually working out.