Hitachi smashes SPC-1 benchmark, boasts: We HAF ways of crushing 2 million IOPs

Hitachi has pushed past the two-million IOPS barrier in the SPC-1 benchmark, with a flash-accelerated VSP claimed to be the fastest-performing storage array ever. The all-flash VSP G1000 array used HAF (Hitachi Accelerated Flash) modules to score 2,004,941.89 SPC-1 IOPS, 62 per cent faster than the previous fastest system, a …

  1. CheesyTheClown

    Nice and slow!

    And kids, this is why we don't buy traditional SANs anymore. They're slow!

    VMware claims to be able to hit a peak of 7 million IOPS with its Virtual SAN solution, and with proper systems like Windows Storage Spaces or OpenStack Swift you can go way, way past that.

    The bottleneck in storage performance is centralized controllers like those found in SANs. They just aren't fast. Storage performance is bottlenecked by where deduplication hashes are calculated, and a SAN always does it at the storage controllers. VMware Virtual SAN is just a distributed, block-based dedup file system, and it's also a major hazard during storage failure. Windows Storage Spaces and OpenStack Swift spread the load much more broadly and as a result are much, much faster.
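
    A minimal sketch of the distributed-placement idea being described here, purely for illustration: content-hash a block and use the digest to pick an owning node, so placement and dedup lookups are spread across the cluster instead of funnelled through one controller. The node names, node count and hash choice are assumptions, not how Virtual SAN, Storage Spaces or Swift actually implement it.

    import hashlib

    # Assumed four-node cluster, purely illustrative.
    NODES = ["node-a", "node-b", "node-c", "node-d"]

    def place_block(block):
        """Hash a block's contents and map the digest onto an owning node.

        The content hash doubles as a dedup key: identical blocks get the
        same digest and therefore the same owner, so no central controller
        has to compute or compare hashes for the whole cluster.
        """
        digest = hashlib.sha256(block).hexdigest()
        owner = NODES[int(digest, 16) % len(NODES)]
        return owner, digest

    owner, key = place_block(b"some 4KB block of data")
    print(owner, key[:12])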

    Using Storage Spaces or Swift, it should be cost-effective to enable a storage tier at the top of rack using two mostly flash-based servers, then a mostly spindle-based tier of near-line storage for the entire data center. This means that where VMware caps at about 90,000 IOPS per node, a Storage Spaces or Swift system can do 500,000+ IOPS per rack and carry dedup across the wire, where VMware doesn't carry it to the next tier. VMware's performance is a dog with fleas because of silly cluster limits imposed on ESXi and Virtual SAN. Azure and OpenStack don't have those silly limits tied to storage, so they scale much further.

    Even with the stupid design of VMware vSphere 6, it can still scale to about 6,000,000 IOPS per 64 blades, or 7,000,000 if the moon is just right. Storage Spaces and Swift will scale far past that.

    So, using a SAN from Hitachi, NetApp, EMC, etc... is just a waste of money.

    1. Anonymous Coward
      Anonymous Coward

      Re: Nice and slow!

      Wow! That's some great marketing right there! But marketing guys, it's time to go home now, and when you care to support your statements with tested, available benchmarks and/or architectures, I might take a minute or two to investigate. All I see now is marketing and no proof.

      Hmm, why not let Microsoft and VMware, these storage luminaries, submit their own SPC benchmarks? Then we'd have it in black and white. =)

    2. dikrek
      Stop

      Re: Nice and slow!

      Hi all, Dimitris from NetApp here.

      7 million IOPS means nothing. What kind of IOPS? What latency?

      Grid architectures have always been able to get big numbers, but latency typically takes a hit.
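
      As a hedged illustration of why the two numbers have to be read together (simple Little's-law arithmetic, not a claim about any particular product): the same headline IOPS figure can be produced at very different latencies simply by piling on concurrency.

      def iops(outstanding_ios, latency_s):
          """Little's law: throughput = concurrency / average service time."""
          return outstanding_ios / latency_s

      # The same 7,000,000 IOPS headline, very different stories for an application:
      print(iops(outstanding_ios=7_000, latency_s=0.001))   # 7M IOPS at 1 ms
      print(iops(outstanding_ios=70_000, latency_s=0.010))  # 7M IOPS at 10 ms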

      Perchance read a primer on storage performance:

      http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

      Storage-agnostic.

      Thx

      D

      1. chrismevans

        Re: Nice and slow!

        Dimitris is correct: the issue is always the quality of that I/O. Imagine what happens when a component fails - does the Hitachi/VMAX/NetApp device stop servicing I/O while disks rebalance? How about the open source solutions? Even a one-second outage at 1M IOPS means a disaster for your application. All of these discussions are irrelevant if you can't manage failure and recovery situations with minimal impact. Until these problems are resolved, none of the open source solutions will be suitable for true enterprise applications.

    3. Anonymous Coward
      Anonymous Coward

      Re: Nice and slow!

      "And kids, this is why we don't buy traditional SANs anymore."

      If we completely ignore all of the enterprise-grade features missing from the two you mention and concentrate solely on performance (theoretical in your case), then please show us some proof of your claims.

      Where's your Storage Spaces (chuckle) or Swift benchmark for SPC-1 or equivalent ?

      Without it, your badly informed post is merely a very poor attempt at marketing; in fact, there's so much wrong with it that I can't even be arsed to pull it apart.

      BTW, well done HDS... pretty much undermines the "true AFA designed for flash" argument.

    4. pallaire

      Re: Nice and slow!

      Hi Cheesy,

      First a disclaimer, I am an HDS employee.

      Careful about drinking the ServerSAN Kool-Aid. I love VMware VSAN and what they do, but be real when making performance comparisons… Most vendors' performance claims are read-only I/O or, at worst, theoretical…. I encourage you to look at vendors' tech docs on VSAN. See HP (http://h20565.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c04409909-1&docLocale=en_US) as an example: "HP claims using four HP ProLiant SL210t Gen8 server nodes, 100% read test can achieve around 150-160K IOPs while 70% read can achieve around 55K IOPs." Why stop at a four-node cluster if it is so easy to scale to 7M IOPS?

      Keep in mind that the SPC-1 test is similar to a 60/40 read/write workload, so a four-node VSAN under an SPC-1-like workload would do much less than 55K IOPS.
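
      A rough back-of-the-envelope of that point, using the HP figures quoted above (155K is taken as the midpoint of the 150-160K read-only range; the assumption that per-read and per-write costs add linearly is mine, not HP's or VMware's):

      def mixed_iops(read_only_iops, mix_iops, mix_read_frac, target_read_frac):
          """Crude linear cost model: back out an effective per-write cost from
          two measured points, then estimate throughput at another read/write mix."""
          cost_read = 1.0 / read_only_iops
          cost_write = (1.0 / mix_iops - mix_read_frac * cost_read) / (1.0 - mix_read_frac)
          return 1.0 / (target_read_frac * cost_read + (1.0 - target_read_frac) * cost_write)

      # ~155K IOPS at 100% read and ~55K at 70/30 on a four-node VSAN (HP's figures),
      # estimated at an SPC-1-like 60/40 mix:
      print(round(mixed_iops(155_000, 55_000, 0.7, 0.6)))  # roughly 45K IOPS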

      Until we see VSAN deployments with 40Gb Ethernet and/or converged Ethernet with RDMA support, the network will remain the bottleneck, and large VSAN clusters scaling with real applications will remain a dream.

      Performance comparison is a tricky business; be careful about taking any vendor claims at face value without proof… There is a reason why big storage buyers perform vendor bake-offs with their own apps… History has shown that most vendor claims are just hype to get attention and do not hold up in the real world. At least SPC-1 is public, detailed, and a way to get an important baseline.

      Patrick Allaire

    5. Dave Hilling

      Re: Nice and slow!

      If you don't need 7M IOPS, then buy the system that meets your needs. We hit up to about 200,000 IOPS out of our storage, and we only ever actually need or hit that when we purposefully run benchmarks. 95% of the rest of the time it's abysmally underutilized, but it's dead simple to use and meets our needs, serving 400+ VMs mostly under 1ms latency. Even if we had 1M IOPS we would pretty much never hit it. Yes, I know some places can and do need insane performance, but that isn't, say, 95%+ of workloads.
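
      For scale, the arithmetic behind that (using only the figures quoted in this comment):

      peak_iops = 200_000          # only ever reached when deliberately benchmarking
      vm_count = 400               # VMs served by the array
      print(peak_iops / vm_count)  # ~500 IOPS of headroom per VM, on average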

  2. realitystynx

    Who would use

    Would you buy something when you can only use 54% of total purchase? Purchase 114TB of storage to get 30TB?

    Also, the $2 million isn't list price, it's the discounted price, so you should probably edit the story.

    1. Nate Amsden

      Re: Who would use

      Yeah, discounted to the tune of 58% off hardware and 39% off software; list price is just over $4.4M. Also, they are less than 1% away from being disqualified due to too much unused storage (44.31% vs the 45% maximum). Still very impressive results in any case, even if it does take up two cabinets :)

    2. Anonymous Coward
      Anonymous Coward

      Re: Who would use

      You have misunderstood some of the numbers.

      Yes, it is RAID 10, meaning that you can only use 50% of the capacity. This is how it works for all RAID-10 systems.

      Of the then-available capacity, 30TB has been provisioned for the test. This doesn't mean that you only get 30/110 usage; it simply means that you have capacity at hand that hasn't been provisioned for use.
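
      A rough sketch of the capacity arithmetic with the figures quoted in this thread (assumes plain two-way mirroring and ignores spares and formatting overhead):

      raw_tb = 114.0              # physical flash purchased
      usable_tb = raw_tb / 2      # RAID 10 mirrors everything, so roughly 57 TB usable
      provisioned_tb = 30.0       # capacity actually configured for the SPC-1 run

      print(f"usable after mirroring: {usable_tb:.0f} TB")
      print(f"left unprovisioned but available: {usable_tb - provisioned_tb:.0f} TB")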

  3. Stephen McLaughlin

    I understand these tests are in a controlled environment, but these numbers are very impressive.

  4. thegreatsatan

    Eclipsed and Irrelevant in 12 months' time

    That's where this system (and many others) will be from a market perspective. When technologies like DSSD (and similarly architected solutions) eventually come to market, these old dinosaurs will be relegated to the scrap heap. The plain truth of it is, very few customers actually need those performance numbers. And for the performance they do require, they can get it at a fraction of the cost of this array, or any of the other monolithic boxes designed in the '90s.

    Storage vendors need to drop the speeds-and-feeds crap marketing that they have used for the last few decades and focus far more on the actual benefit that their products bring to an organization; that is what will drive sales. Everything else is just chest-thumping crap that no one who pays for these products cares about.

  5. Anonymous Coward
    Anonymous Coward

    > Hitachi's result means that EMC's VMAX and IBM's DS8000 arrays, the only comparable high-end systems, now have a performance mountain to climb when engaged in competing bids with the GSP.

    Apart from with customers who know that these numbers mean bugger all in the real world.

    1. Anonymous Coward
      Anonymous Coward

      What, and EMC's made-up stuff is more accurate?

  6. Captain DaFt

    "The top SPC-1 IOPS benchmark results"

    Really? Looks like Ikea designer bookshelves to me.

  7. P. Lee

    Beware Over-Consolidation

    It is not an end in itself.

    It is OK to split your workload up by some arbitrary algorithm, keep a directory of what is stored where, and rebalance if required. That might be a little cheaper than putting in flash monsters and 40G networking links everywhere.
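
    A minimal sketch of the "directory of what is stored where" idea (illustrative only; the node names, the load-based assignment rule and the rebalance trigger are all assumptions):

    class ShardDirectory:
        """Tracks which node owns each workload and supports rebalancing."""

        def __init__(self, nodes):
            self.nodes = list(nodes)
            self.placement = {}  # workload id -> owning node

        def assign(self, workload_id):
            # Arbitrary algorithm: give the workload to the least-loaded node.
            owner = min(self.nodes,
                        key=lambda n: sum(1 for o in self.placement.values() if o == n))
            self.placement[workload_id] = owner
            return owner

        def rebalance(self, from_node, to_node):
            # Move everything off a hot (or failed) node and record the new owner.
            for wid, owner in self.placement.items():
                if owner == from_node:
                    self.placement[wid] = to_node

    d = ShardDirectory(["rack1-flash", "rack2-flash"])
    print(d.assign("db01"), d.assign("web01"))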
