Infinidat puts array to the test, says it 'wrecks' Pure and EMC systems

Infinidat has run performance tests against Pure and EMC all-flash arrays and surpassed them with its own array. Back in November EMC compared its Unity 600F array against Pure's FlashArray//m50 and //m70 in 8K and 16K IOPS tests, among others. The Unity advantage was substantial, and Pure disputed the real-world validity of the …

  1. Anonymous Coward

    infidel

    For a moment I fell off my chair. What is an infidel doing here?

  2. Anonymous Coward

    Another misleading benchmark

    So they have 1.1 TB of RAM, 200 TB of flash, and they tested with a 200 TB workload. This means that most, if not all, I/O was satisfied from RAM or flash. Yeah, you should get good results! The 480 NL-SAS drives, designed to make the box look "big", were mostly idle in this test. It's safe to say that as a customer pushes utilization higher to contain cost, they will see much higher response times.
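    A back-of-the-envelope sketch in Python makes the point (the sizes are the ones quoted above; uniform random reads over the whole dataset is my assumption):

    ```python
    # Rough cache-hit estimate for uniform random reads: an access lands in a
    # tier with probability (tier size / working-set size). Sizes in TB are
    # those quoted above; uniform access is an assumption.
    ram_tb, flash_tb, dataset_tb = 1.1, 200.0, 200.0

    ram_hits = min(ram_tb / dataset_tb, 1.0)
    flash_hits = min(flash_tb / dataset_tb, 1.0) - ram_hits
    disk_hits = 1.0 - ram_hits - flash_hits

    print(f"RAM:   {ram_hits:6.1%}")    # ~0.6% of reads served from DRAM
    print(f"flash: {flash_hits:6.1%}")  # ~99.4% served from flash
    print(f"disk:  {disk_hits:6.1%}")   # ~0% ever touch the NL-SAS drives
    ```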

    For now, anyone's hybrid machine with Flash (TLC)+NL-SAS is the way to go. The TLC drives continue to increase in size but NL-SAS are seeing only minor increases in capacity. Vendors are integrating data reduction everywhere to help with cost and durability of flash. By 2019, most customers will be buying all-flash. INFINIDAT who?

    1. Anonymous Coward

      Re: Another misleading benchmark

      In that case, by 2019 INFINIDAT will be running flash (TLC)+NL-SAS, if the costs decrease to the point they are less than HDD. Its architecture isn't designed around NL-SAS or any media type; its software will take advantage of any media. NL-SAS drives continue to be the low-cost media type by a wide margin. If INFINIDAT can deliver better performance versus all-flash using a lower-cost medium, and those savings are passed on to the customer, why does the media matter? Especially when you are getting better reliability, DC consolidation and ease of use. Folks have been saying since 2012 that disk drives were going away. Check the roadmap from HDD suppliers from an independent source: there are plans for 20TB to 100TB NL-SAS drives. But again, it doesn't matter to INFINIDAT what happens. That's the brilliance in their design and software innovation.

      1. Anonymous Coward

        Re: Another misleading benchmark

        It's brilliant marketing, for sure. "Ultra-high performance" is being touted, but your employer only benchmarks the flash portion of the array, assuming we're all naive.

        As for HDDs, they are going away, and the proof is the huge decline of 15K/10K HDDs. NL-SAS is next, at least on-prem, where we don't have cloud-scale requirements.

        I'll give you this: INFINIDAT's marketing has helped to highlight that EMC's mid-range platform isn't the gold standard anymore and, if anything, Unity (can I still call it that?) is lagging behind several other competitive platforms.

        1. Anonymous Coward

          Re: Another misleading benchmark

          >As for HDDs, they are going away, and the proof is the huge decline of 15K/10K HDDs. NL-SAS is next, at least on-prem, where we don't have cloud-scale requirements.

          How is the huge decline of 15K and 10K drives proof that NL-SAS will go away? The only reason anyone ever used 10K or 15K is because they were faster than NL-SAS. They stored far less, were more expensive, and drew more power, but were slightly quicker.

          Because 15K drives, in particular, are so much more expensive, the per-TB differential between them and flash is not that great, so flash has in effect made them redundant. This is not true for NL-SAS drives; nobody ever bought them for their ability to provide IOPS.

      2. Anonymous Coward

        Re: Another misleading benchmark

        This is yet another discussion about the sub-components (HDD, TLC, DRAM) of a storage solution. Synthetic benchmark results don't help either; there's too much fudging in those.

        Frankly, I don't care anymore how it's built, what media is in use or what benchmark numbers a vendor can show. I care about reliability, cost and performance in MY environment and on real applications.

    2. Fru.Murphy

      Re: Another misleading benchmark

      From someone who has two of these bad boys in production, I would say they are spot on. Have a look at Storage Field Day 8 for a deep dive into this single-cabinet storage array. I run a mix of workloads on this array (almost a dozen three-node SQL clusters, approximately 1,000 Windows virtual machines on VMware 5.x, and a Veeam-like backup solution) with an average of 70-90K IOPS at 1-3 ms reads and sub-ms writes. Three controllers with block and NAS and 1PB of storage (we picked the smaller of their storage sizes); you cannot ask for a more solid, low-latency workhorse. I have no doubt Infinidat will adapt to changing technology, just as Pure Storage and Dell/EMC will. Don't knock it until you TRY it.

    3. Anonymous Coward

      Re: Another misleading benchmark

      >It's safe to say that as a customer pushes utilization higher to contain cost, they will see much higher response times.

      Why do you think that's safe? Have you ever used InfiniBox or do you just not understand how it works?

      >For now, anyone's hybrid machine with Flash (TLC)+NL-SAS is the way to go.

      No, it isn't. Hybrid machines (formerly known as tiered storage) both store data and serve IOPS from each tier. They attempt to put blocks of data which need a lot of IOPS on the faster tiers, but as anyone who has ever used them knows, this only works properly when data access profiles are entirely predictable, which they rarely are. InfiniBox looks at data in real time, as it comes into the box; IOs are served from DRAM (the quickest place to do this) and flash (not as good as DRAM, but as good as the AFAs). Data is stored in a sequential manner on the drives (even if it came in as small-block random) and is distributed across them all, too. Slow NL-SAS drives are more than good enough for this.

      If you can read and store random data as sequential on disk drives, why the hell would you waste money putting it on flash?
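      The write path described above, where random incoming writes are absorbed in DRAM and flushed as one big sequential stream, is essentially log-structured storage. A minimal sketch of the idea (this is not Infinidat's actual code; the class name and flush threshold are invented for illustration):

      ```python
      # Minimal log-structured write sketch: random-offset writes are absorbed
      # in a DRAM buffer, then flushed as one large sequential append, which
      # is cheap even on slow NL-SAS drives. Names and the threshold are
      # illustrative only.
      class LogStructuredStore:
          def __init__(self, flush_threshold=64):
              self.buffer = {}              # offset -> data, held in "DRAM"
              self.log = []                 # sequential on-disk log segments
              self.flush_threshold = flush_threshold

          def write(self, offset, data):
              self.buffer[offset] = data    # random write lands in cache
              if len(self.buffer) >= self.flush_threshold:
                  self.flush()

          def flush(self):
              # One big sequential append instead of many random seeks.
              segment = sorted(self.buffer.items())
              self.log.append(segment)
              self.buffer.clear()
      ```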

      >The TLC drives continue to increase in size but NL-SAS are seeing only minor increases in capacity.

      Absolutely correct. Hypercars are also getting stupidly quick, but you need to be a billionaire to own one. Once TLC can provide a comparable cost/GB, that's where everyone will put their data. And don't spout the data-reduction bull; you can perform data reduction on any storage medium.

      >Vendors are integrating data reduction everywhere to help with cost and durability of flash. By 2019, most customers will be buying all-flash.

      Ah right, you just did. The thing is, IT budgets are shrinking. Most CFOs can't actually afford to splash money on flash, and while data reduction can reduce the amount of storage needed, it can only do so to a certain extent. Most flash arrays being sold at the moment are in the tens-of-TB range, and of course there are applications which benefit greatly from them. If you can afford to spend 5-10 times as much as you need to on flash storage then go for it. I suspect you're not in control of the finances, however.
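      Putting rough numbers on that last point (the prices and the reduction ratio below are illustrative assumptions, not market figures):

      ```python
      # Placeholder arithmetic: effective $/GB of flash after data reduction,
      # versus raw disk. All figures here are assumptions for illustration.
      flash_per_gb, disk_per_gb = 0.50, 0.05
      reduction_ratio = 4.0

      effective_flash = flash_per_gb / reduction_ratio
      print(f"flash after {reduction_ratio:.0f}:1 reduction: "
            f"${effective_flash:.3f}/GB vs raw disk ${disk_per_gb:.3f}/GB")
      # Reduction narrows the gap but, at these assumed prices, flash still
      # costs several times more per GB than NL-SAS.
      ```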

  3. Anonymous Coward

    Great Idea..

    Sounds exactly like Nimble Storage. I wonder why they weren't tested, as they are much, much bigger than Pure...

  4. Pascal Monett

    "virtually no IOs came direct from or go directly to disk"

    I sincerely hope, in case of power failure, that they have a very capable battery backup system with proven failover, and generators that kick in immediately to supplement the batteries, because otherwise I think there'll be a hell of a clean-up job when the power comes back on again.

    I'm sure they have that in mind, though. Just wonder what the figures are on power consumption and time-to-shutdown in case of outage.

  5. This post has been deleted by its author

    1. Big_JM

      Re: block sizes matter

      It's obvious you're a Pure employee, because your comment was terrible and makes no sense. Infinidat clearly used the same block size/workload across all three arrays. Just because Pure markets a 32K block size doesn't mean it actually uses it; Pure markets a lot of things it doesn't do. Why would this be any different? In fact, this isn't even the first publicly released benchmark that's exposed how poorly Pure performs.

      In fact, everyone is missing a very important point in all of this: Infinidat KNEW that they could take advantage of Pure & Unity because they know what most actual technologists know: both of those are single-node, CPU-bound architectures.

      Of course Infinidat's results are "rigged" in their favor. XtremIO did the same thing with its bogus Vdbench script. Of course Infinidat highlighted what their system could do. But that doesn't explain why Pure & Unity performed so poorly.

      Think about it. Why didn't they test a VMAX 250? Could it be because they knew that the VMAX 250 (which is also marketed as a "midrange" array) wouldn't suck? Why not a 3PAR? Or an HDS F400/F600?

      They picked these two systems because they knew these systems are popular but both are highly overrated. Even if Infinidat could outperform an HDS or a VMAX or a 3PAR, the results would be MUCH closer than this and wouldn't be enough to differentiate it this way.

      1. JCWCVG

        Re: block sizes matter

        Good points, Big_JM. Check out the latest Infinidat blog for more info as well. When one compares the price per TB of Infinidat to ANY AFA, the cost advantage is overwhelmingly in Infinidat's favor. Customers can get affordable storage with better-than-AFA performance at petabyte scale, all without voodoo and black-magic data reduction techniques. Oh, and 100X better reliability.

      2. RollTide14

        Re: block sizes matter

        For Pure to get blown away by a Unity box is not a great look... yikes.

    2. dikrek

      Re: block sizes matter

      Hi all, Dimitris from Nimble, an HPE company.

      @Vaughn - hey bud, transactional applications do transactional I/O in the 4/8K range, not 32K. http://bit.ly/2oUDNFI

      Are you saying Pure arrays massively slow down when doing transactional 4/8K I/O, and are only fast at 32K? What happens at 256K or above, which is a typical I/O size for apps that aren't doing transactional things?
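      To put numbers on why block size matters: at a fixed bandwidth ceiling, IOPS and block size trade off directly, so a big-block IOPS figure isn't comparable to a small-block one. A quick illustration (the 3 GB/s ceiling is an arbitrary figure, not any vendor's spec):

      ```python
      # At a fixed bandwidth ceiling, IOPS = bandwidth / block size, so the
      # same array posts wildly different IOPS numbers depending on block
      # size. The 3 GB/s ceiling is an arbitrary illustrative figure.
      bandwidth = 3 * 1024**3  # bytes/sec

      for kib in (4, 8, 16, 32, 256):
          iops = bandwidth / (kib * 1024)
          print(f"{kib:>3} KiB blocks -> {iops:>9,.0f} IOPS at the same bandwidth")
      ```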

      I can't comment on the validity of these specific results, but I'll say one thing: unless Infinidat supplies the exact testing parameters, it's hard to be sure the testing was performed correctly.

      In general, when seeing any test, you need to know things like:

      - number of servers used

      - number of LUNs used

      - number of fabric ports (and their speed)

      - server models & exact specs, switch models, HBA models, host OS version, all host tunings that differ from the defaults

      - benchmarking app used

      - detailed I/O blend

      - amount of "hot" data - even with a 200TB data set, if the amount of hot data is 1TB and that fits in RAM cache, the array will likely be fast. If you want to stress-test caching systems, my rule of thumb is: the amount of hot data should exceed the largest RAM cache by 5x (see the sketch that follows this list).
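      Here is how those parameters might map onto an fio run (fio is one common choice of benchmarking app; every value below is a placeholder to swap for your environment's real numbers, and the device path is hypothetical):

      ```python
      # Sketch: mapping the checklist above onto an fio command line. All
      # values are placeholders, not recommendations. The size follows the
      # "hot data should exceed the largest RAM cache by 5x" rule of thumb.
      ram_cache_tb = 1.1                      # largest RAM cache in the array
      hot_data_tb = 5 * ram_cache_tb          # rule of thumb from above

      cmd = [
          "fio", "--name=mixed-oltp",
          "--ioengine=libaio", "--direct=1",  # bypass the host page cache
          "--rw=randrw", "--rwmixread=70",    # detailed I/O blend: 70/30 R/W
          "--bs=8k",                          # transactional block size
          f"--size={hot_data_tb:.1f}t",       # hot working set per job
          "--iodepth=32", "--numjobs=8",      # queue depth and parallelism
          "--runtime=600", "--time_based",
          "--filename=/dev/sdX",              # placeholder device/LUN
          "--group_reporting",
      ]
      print(" ".join(cmd))                    # review, then run the command
      ```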

      A couple of vendor-agnostic articles that may help:

      http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

      http://recoverymonkey.org/2015/10/01/proper-testing-vs-real-world-testing/

      Thx

      D

  6. Anonymous Coward

    Rumor has it they also wanted to test 3PAR and VMAX 250, but one kept crashing during the test and the other they are still installing...

  7. briancarmody

    Brian from Infinidat here. I wrote a blog post about this test that includes some context. Enjoy:

    http://www.infinidat.com/blog/all-flash-is-not-fast/

    1. ManMountain1

      I'm still none the wiser, to be honest! You did a test ... we don't really know what it was, but you're saying it was awesome!

  8. Anonymous Coward

    Infinidat.

    Filling the gap of absolute humor left when HDS publicly stopped peddling their HDS-math nonsense.

    I present:

    INFINIMATH!

  9. Anonymous Coward

    Deserves ZERO credibility.

    I'm shocked this article wasn't laughed off the proverbial stage, but hey, it got me to comment, I suppose. The blog post by BC and this article specifically don't deserve an ounce of credibility. I won't even speak to the test run by EMC. I had a seven-year-old CLARiiON perform better than what EMC stated Pure performed at, which I can undoubtedly say is entirely false (recent testing showed 330K-ish 16K write IOPS on Pure's m20, second gen, with well below 1ms latency), but that's not why I'm posting.

    I'm posting because I'm hopeful for a time where tech wins because of tech, and not marketing BS. I've seen it in numerous purchasing cycles with all the OEMs and the misinformation spread here by Infinidat is completely irresponsible, although they're fighting for a piece - that I get. It just comes off as a cry for help.

    Consider this my Jerry Maguire memo: "Have Fewer Clients." Infinidat just happened to be the straw. I want these companies to start getting called out for their BS. Feels like I'm on Facebook during the presidential campaign.

    1. Anonymous Coward

      Re: Deserves ZERO credibility.

      Put it in context. Infinidat are saying that if EMC can come up with a synthetic benchmark that beats Pure's synthetic benchmark, then they too can come up with a synthetic benchmark that will beat EMC's.

      Anyone purchasing storage will want to see it in the flesh, and proofs of concept will do this. Generally speaking, customers who complete proofs of concept do not want to share their results publicly. They may provide a reference, but generally the way to go is to run a test with your own requirements.

  10. Anonymous Coward

    Let's see here... lots of vendor posts and replies. Do customers care about this? Maybe!

    And, if they do, I'd ask them not to take these numbers at face value. It's vendor jujitsu. IT guys and gals: do a POC if it's that important, or refer to tested and documented reference designs with best practices. Because no one uses benchmark configs in the real world, nor should they care.

    1. Anonymous Coward

      >Because no one uses benchmark configs in the real world, nor should they care.

      Spot on. But vendors need to do this kind of thing. If one vendor did it, and nobody else did, then that vendor would most likely sell more. Marketing, basically.

      The worst thing you could ever trust is an IOPS figure. How do you double your IOPS? Put two of them next to each other. Easy, and it proves nothing.

  11. kn7671

    Infinidat - Test Result Lies

    Lies, lies, lies... 33K IOPS my ass; only if the tests were misconfigured to purposely return poor results.

    At my current employer, we have ten Pure Storage arrays, ranging from M20s to M70 R2s. I can tell you factually, because I performed the benchmarks, that we were able to achieve 293,000 IOPS at 8K and 242,000 IOPS at 16K, executing the workload from multiple VMware guests across multiple ESX hosts. Even our Unity 600F arrays were able to pull 108,000 IOPS.

    1. Rob Isrob

      Storage Benchmarks - in general

      Back in the day, some of us remember TPC-C... and I fondly recall the perf debates with our nemesis "The British Champion"... but I digress. TPC-C (yeah, more than storage, but with a huge storage component) was pretty much gamed, and the follow-on TPC-D benchmark tried (did?) to get around that. Later SPC came along; same thing. Much to their credit, they keep tweaking, and now we have SPC-1 v3. Maybe a decent benchmark? I don't know; I gave up following all that. As others point out, you really want to POC this stuff. I think that's the whole point of muddying the waters on purpose here... marketing. POC it; it will do decently, but the shocking thing will be that it is vastly more affordable. Shoot, InfiniBox lists for about a buck a GB... there is a reason a lot of AFA folks are running in circles and shouting "Squirrel!"

    2. Anonymous Coward

      Re: Infinidat - Test Result Lies

      >Lies, lies, lies... 33K IOPS my ass; only if the tests were misconfigured to purposely return poor results.

      These were EMC's test results, not Infinidat's. Follow the links to Chris Mellor's previous article, which explains the basis of the test and has a response from Pure. The workload is standard across the testing of all three vendors' systems.

      I don't doubt that you achieved the big impressive numbers you claim, but I do doubt you used the same tests that are referenced in the previous article.

      1. Anonymous Coward

        Re: Infinidat - Test Result Lies

        You are correct, as the test EMC ran was built to purposefully break Pure, not to be real-world. They claim they followed the IDC testing, but those claims are incorrect for a host of reasons.

        If you actually used that test on Infinidat the way EMC wants, then you'd fill the array a few times over, keep the dataset around ~80% full (not 30% like Infinidat did) and then run random reads/writes against it. The Infinidat would start to have cache misses, as the dataset is bigger than their DRAM.

        Infinidat pretends like they made it harder by testing 200TB versus the 50TB used with Pure/EMC, but the size isn't the important part. It's how the array reacts to random reads across 80% of the total capacity of the system. There's no intelligent caching algorithm that will be able to handle this, as the reads won't follow the data-locality patterns Infinidat is expecting to see. That's just one reason why the EMC test is BS, as IDC recommends implementing data locality into the synthetic workload.
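        Rough numbers show why fill level matters (the 200TB flash figure is from earlier in the thread; the usable capacity is inferred from the 200TB dataset being ~30% full, and uniform random reads are my assumption):

        ```python
        # Cache hit rate versus fill level, assuming uniform random reads
        # across the filled capacity. Usable capacity is inferred from the
        # claim that 200TB was ~30% full; uniformity is an assumption.
        dataset_tb, flash_tb = 200.0, 200.0
        usable_tb = dataset_tb / 0.30     # roughly 667 TB usable

        for fill in (0.30, 0.80):
            working_set = usable_tb * fill
            cache_hit = min(flash_tb / working_set, 1.0)
            print(f"{fill:.0%} full: working set {working_set:.0f} TB, "
                  f"DRAM+flash hit rate {cache_hit:.0%}, "
                  f"disk reads {1 - cache_hit:.0%}")
        # At ~30% full everything fits in flash; at ~80% full, well over
        # half the reads would have to come from the NL-SAS drives.
        ```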

  12. Jackmeov

    Are you serious? This is so much bullshit. Here is Carmody again, writing about nothing. This benchmark is based on 480 drives versus at most 60 flash drives. The data set fits in Infinidat's flash system and the I/O is sequential. The box only performs with a large cache and all 480 drives. If you only need 200TB then you are screwed. Not so with an AFA.

    Today's world is all about performance, IOPS and bandwidth. This is something that Infinidat can ONLY deliver with a full configuration; therefore, the $/performance advantage is gone.

    No doubt, the world will be all-flash, make no mistake. That's the reason Infinidat is dying in Europe, and the same in Asia and Latin America.

    Now, with NVMe, AFAs are delivering 5M IOPS with just 60 drives, with latencies under 100µs. This benchmark is BULLSHIT...! Infinidat will need to raise more cash to stay afloat and figure out its next steps.
