HDS gives EMC and NetApp a good benchmark kicking

HDS folks are full of glee after beating EMC and NetApp in the SPEC filer benchmark. The company ran 2- and 4-node all-flash HUS VM configurations in the SPECsfs2008 NFS benchmark. The scores were:

HDS all-flash HUS VM, 4 x HNAS 4100 nodes: 607,647 ops/sec and 0.89 ms overall response time (ORT)

HDS all-flash HUS VM, 2 x …


This topic is closed for new posts.
  1. Alex McDonald 1


    AAAARGGHHH! They're NOT IOPS!

    A little furry kitten dies in the benchmark labs every time someone quotes IOPS on a SPEC SFS.

    SPEC SFS does not measure IOPS. SANs do IOPS, and the benchmark for them is SPC-1. NFS systems do NFS operations, and SPEC SFS measures them.

    Only 28% of the operations in SPEC SFS are READ or WRITE operations (that is, operations that access the data). The other 72% are metadata operations (mainly on directory information).

    There's a big difference. Please, think of the little kitties.
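
    The arithmetic behind that point can be sketched quickly. This is only an illustration using the 28%/72% split quoted above and the 607,647 ops/sec headline figure from the article; the exact SPECsfs2008 mix varies by operation type.

```python
# Rough arithmetic on the SPECsfs2008 NFS operation mix quoted above.
# The 28% data / 72% metadata split and the 607,647 ops/sec figure come
# from this thread/article; treat them as illustrative only.

TOTAL_OPS_PER_SEC = 607_647  # 4-node HUS VM headline result
DATA_OP_FRACTION = 0.28      # READ + WRITE share of the SPEC SFS mix

data_ops = TOTAL_OPS_PER_SEC * DATA_OP_FRACTION
meta_ops = TOTAL_OPS_PER_SEC * (1 - DATA_OP_FRACTION)

print(f"data ops/sec:     {data_ops:,.0f}")  # ~170,141
print(f"metadata ops/sec: {meta_ops:,.0f}")  # ~437,506
```

    In other words, well under a third of the headline number is actual data access, which is why comparing it directly to block-storage IOPS is misleading.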

    1. Nate Amsden

      Re: AAAARGGHHH! They're NOT IOPS!

      Good point, but they're still IOPS - just file I/Os per second instead of disk-block I/Os.

      Anyway, I think it would have been good had HDS put their best foot forward and run an 8-node BlueArc Titan cluster (last I recall, they have been able to do 8-node clusters going back to at least 2008, probably further).

      There are no cost disclosures in SPECsfs, so there is little harm in going balls to the wall with the biggest/baddest config.

      Maybe the Titan (or whatever HDS calls it today) only runs behind the VSP these days - I don't know.

      1. Alex McDonald 1

        Re: AAAARGGHHH! They're NOT IOPS!

        What do you mean by "file I/Os per second, instead of disk block"?

        NFS operations are not "file I/Os" since a large chunk of them -- 72% as I pointed out earlier -- are not operations on the file at all. This sort of sloppy thinking leads to no more than the death of another kitten.

        PS: I'm sure all the marketing suits at HDS are really delighted that the rebranding from BlueArc to HUS they slaved over all those years ago has completely escaped you. Mind you, that's excusable. No kittens died for that mistake.

      2. J.T

        Re: AAAARGGHHH! They're NOT IOPS!

        There is pricing on the SPC-1 benchmark: they did an all-flash VSP and killed everyone at a significantly lower cost - and with a single VSP.

        All of the HNAS platforms can run against the entire HDS portfolio: HUS, HUS VM, and VSP. Yes, the highest-end model can do 8 nodes.

    2. Anonymous Coward

      Re: AAAARGGHHH! They're NOT IOPS!

      >SANs do IOPS, and the benchmark for them is SPC-1.

      SPC-1 measures storage systems, not the networks that connect them.

  2. Lusty

    Have I had too much wine, or are there a load of words missing from this article explaining why the NetApp result two places up from the top HDS entry isn't better than the HDS one? Are we really being that specific in achievements now that HDS are proud to have the "fastest 2-node all-flash filer currently submitted for testing to SPEC"?

    1. This post has been deleted by its author

    2. This post has been deleted by its author

  3. Anonymous Coward

    NetApp is 24 NODES vs HDS 4 NODES

    If you look at the SPEC results, the HDS 4-node configuration crushes NetApp's FAS6240 8-node configuration.

    The higher-performing NetApp FAS6240 result is a 24-node configuration.

    Looking at those results, the throughput numbers are interesting, but response time is probably more critical - and that's where NetApp completely falls apart, even with 24 nodes.

    1. Alex McDonald 1

      Re: NetApp is 24 NODES vs HDS 4 NODES

      I'm not surprised that the HDS box did as well as it did, given that both NetApp benchmarks were submitted in September 2011 (2 years ago) and didn't have the benefit of SSDs. The 6240 is no longer sold.

    2. Anonymous Coward

      Re: NetApp is 24 NODES vs HDS 4 NODES

      Hmm... how come there is a 30+% degradation when going from 2 to 4 nodes?

      1. Anonymous Coward

        Re: NetApp is 24 NODES vs HDS 4 NODES

        It's not a 30% degradation; they probably didn't push the 2-node system as far as it could go for fear of degrading the overall response time. They likely wouldn't have been able to claim the lowest ORT *ever* if they hadn't underworked the 2-node system.
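
        A quick sanity check supports that reading. Using the 298,648 ops/sec two-node figure quoted elsewhere in this thread and the 607,647 ops/sec four-node figure, per-node throughput is roughly flat between the two runs; this is a back-of-the-envelope sketch, not an official SPEC calculation.

```python
# Quick check of 2-node vs 4-node scaling, using the ops/sec figures
# quoted in this thread: 298,648 (2 nodes) and 607,647 (4 nodes).

two_node = 298_648
four_node = 607_647

per_node_2 = two_node / 2    # ops/sec per node in the 2-node run
per_node_4 = four_node / 4   # ops/sec per node in the 4-node run

# Scaling efficiency: 1.0 means perfectly linear scaling from 2 to 4 nodes.
efficiency = four_node / (2 * two_node)

print(f"per node (2-node run): {per_node_2:,.0f}")   # ~149,324
print(f"per node (4-node run): {per_node_4:,.0f}")   # ~151,912
print(f"2->4 node scaling efficiency: {efficiency:.3f}")
```

        On these published numbers the efficiency is slightly above 1.0, consistent with the 2-node system not being driven to its limit.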

    3. Lusty

      Re: NetApp is 24 NODES vs HDS 4 NODES

      But then the NetApp had 288TB available, so it was more worthwhile having all those IOPS on tap. NAS generally needs capacity to scale with performance, since it's typically used where performance is needed for more users. If you need raw capacity, then SAN would be the way to go - and certainly not over Ethernet.

  4. Fritz01

    Some remarks regarding the report and HNAS 4100 benchmarking

    1) The HUS VM All Flash / two-node HNAS 4100 benchmark delivers 298,648 SPECsfs2008 NFS ops/sec; there is a typo in Chris Mellor's report.

    2) IMHO a qualified two-node HNAS 4100 result should be more than 50% of the four-node result. The reason for the shortfall is the use of only 24 of the 32 SAS 6Gbps back-end paths with the 3 configured DBF flash boxes. It would be much better to distribute the 32 flash module drives across 4 flash boxes. For the four-node benchmark, a symmetric distribution of the 64 flash module drives across 8 flash boxes is favourable.

    3) The configuration diagram for the Four Node HNAS 4100 benchmark

    ( --> )

    is somewhat misleading: all four 4100 nodes are connected to the HUS VM, each with four paths.
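
    The path arithmetic in remark 2 can be sketched as follows. The 8-paths-per-box figure is inferred from the numbers above (3 boxes using 24 of 32 paths), and the helper function is hypothetical, not HDS tooling.

```python
# Sketch of the back-end path arithmetic in remark 2 above. Inferred
# assumption: each DBF flash box exposes 8 SAS 6Gbps paths, since the
# 3-box config used 24 of the 32 available paths.

PATHS_PER_BOX = 8   # inferred: 24 paths / 3 boxes
TOTAL_PATHS = 32

def paths_used(num_boxes: int) -> int:
    """SAS back-end paths usable with a given number of flash boxes."""
    return min(num_boxes * PATHS_PER_BOX, TOTAL_PATHS)

# As-benchmarked 2-node config: 32 flash module drives in 3 boxes
print(paths_used(3))  # 24 of 32 paths
# Suggested alternative: the same 32 drives spread over 4 boxes
print(paths_used(4))  # all 32 paths
```

    On this model, spreading the same number of flash module drives over one more box brings the remaining quarter of the back-end bandwidth into play.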

  5. Anonymous Coward


    IOPS. Brilliant! Does it also have go-faster stripes? What does it do for the customer? I've never yet known a customer in the storage arena buy something just because it allegedly goes faster. Does it reduce costs and drive operational efficiency? No? Well, I'm not interested.

  6. Anonymous Coward

    Use case?

    Impressive numbers, no doubt - congrats HDS.

    The question I have is around the actual use cases these benchmarks address, if any. They really just state an aggregate number of operations an array can drive, nothing more. They aren't talking about IOPS to the same filesystem/application, as in a lot of real-world applications (CAD, multimedia, PACS, etc.).

    Take for example, a file-based application that requires X number of NFS IOPS.

    In a 4-node cluster, unless each node can service requests to the same filesystem in a parallel/distributed manner, the application on that filesystem will only ever get 25% of the total IOPS that can be delivered. The higher the node count, the smaller that percentage in relation to aggregate IOPS.

    It's kind of like saying: I have four cars that each do 250km/h, so according to these benchmarks I can drive at 1,000km/h.

    It would be good to see the IOPS-to-the-filesystem figures...
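
    The point above can be put in code. This is a deliberately simple model - a single flag for whether one filesystem can be served by all nodes in parallel - using the article's 4-node figure purely for illustration.

```python
# Illustration of the aggregate-vs-single-filesystem point above: if a
# cluster cannot serve one filesystem from all nodes in parallel, an
# application bound to one filesystem sees only its node's share.

def single_fs_ops(aggregate_ops: float, nodes: int, parallel_fs: bool) -> float:
    """Ops/sec available to one filesystem under a simple model."""
    return aggregate_ops if parallel_fs else aggregate_ops / nodes

aggregate = 607_647  # 4-node figure from the article

print(single_fs_ops(aggregate, 4, parallel_fs=False))  # one node's 25% share
print(single_fs_ops(aggregate, 4, parallel_fs=True))   # full aggregate
```

    Which of the two lines applies to a given product is exactly the "IOPS-to-the-filesystem" figure the benchmark doesn't disclose.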

  7. M Debelic

    IMHO, a benchmark without a $/GB parameter is superfluous for real-life situations (unless we are talking about a customer with an unlimited purchasing budget whose decisions are based only on performance).

  8. VMStorageCritic

    Need more info on this...

    It would be interesting to see what the block sizes and read/write ratios were for this test. That seems to be the big missing piece of information here. Impressive IOPS numbers from HDS either way, though.


Biting the hand that feeds IT © 1998–2021