Pillar pillages SPC-1 benchmark

Pillar has announced a sparkling SPC-1 benchmark, bettering IBM's Storwize V7000. Oddly EMC is not present in SPC-1 results and NetApp's results are 2008 vintage. What's going on? The SPC-1 benchmark is for block-access storage, not for filers, where the SPECsfs2008 benchmark is used. There are high-end SPC-1 results, ranging …

COMMENTS

This topic is closed for new posts.
  1. IO-IO

    Zzzzz so what?

    Seriously. A whopping 70K IOPS?

    Of 92TB configured, only 32TB was used.

    With the latest and greatest tech they have managed a mediocre result.

    Seriously, dude: if it's a bank holiday weekend, rather write nothing than turn in complete drivel like you have just released on the interweb.

    Shame on you!

  2. FJ-DX

    strange conclusions

    "A Fujitsu Eternus DX440 recorded 97,488.25 IOPS at a cost of $5.51, good value"

    "For the moment the Axiom is the leader of this particular pack"

    Hi Chris,

    May I add my 5 cents to the claims above?

    The Fujitsu results are not merely "good value"; they are the best value for dual-controller storage systems (the particular pack).

    To my knowledge, Fujitsu's ETERNUS DX440 has been the leader in terms of $/IOPS since we ran the benchmark in March 2010, and also in terms of response time, at approx. 5ms at 100% load.

    Best Regards

    Hermann

    1. Storage_Person

      Look Closer...

      Xiotech for one has posted SPC-1 $/IOPS that are significantly lower than those posted by the ETERNUS DX440 so no, I'm afraid that Fujitsu is not the leader.

      And although it seems unfair to criticise a performance-centric benchmark for being one-dimensional, there are many other factors that should be taken into account in addition to $/IOPS. $/GB is an obvious one, and SPC-1 is actually a great place to obtain real-world comparisons, as the reports break down where the storage is 'wasted' (metadata, RAID, sparing, etc.). Others include IOPS/U and GB/U for density, IOPS/W and GB/W for power efficiency, and there are more.

      The other really important point, which has been seen with various benchmark results posted here, is that it is important to be able to scale these results *down*. It's easy to recognise that if a system can generate 100,000 IOPS then two of them can generate 200,000 IOPS, but if you only want 20,000 IOPS, will your cost be 20% of the benchmarked result or 90%? It would be great to see something like SPC-1 extended to a range of sizes (for example 20TB, 100TB, 500TB) to see how costs scale as well as performance.
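
      A rough sketch of how those derived metrics fall out of a published result, and of the scale-down question, might look like the Python below; every figure in it is hypothetical, invented for illustration, not taken from any actual SPC-1 disclosure.

      # Hypothetical SPC-1-style figures, invented for illustration.
      result = {
          "price_usd": 500_000,  # total price of the tested configuration
          "iops": 100_000,       # SPC-1 IOPS at 100% load
          "usable_gb": 20_000,   # ASU (application storage unit) capacity
          "rack_units": 24,      # physical footprint
          "watts": 3_000,        # measured power draw (SPC-1/E style)
      }

      # Derived metrics beyond the headline $/IOPS figure.
      print(f"$/IOPS : {result['price_usd'] / result['iops']:.2f}")
      print(f"$/GB   : {result['price_usd'] / result['usable_gb']:.2f}")
      print(f"IOPS/U : {result['iops'] / result['rack_units']:.0f}")
      print(f"GB/U   : {result['usable_gb'] / result['rack_units']:.0f}")
      print(f"IOPS/W : {result['iops'] / result['watts']:.1f}")
      print(f"GB/W   : {result['usable_gb'] / result['watts']:.1f}")

      # The scale-down question: $5/IOPS at 100,000 IOPS does not imply
      # $5/IOPS at 20,000 IOPS if controllers and software are a fixed
      # cost (the floor below is hypothetical) that small configs keep.
      fixed_cost = 200_000
      per_iops = (result["price_usd"] - fixed_cost) / result["iops"]
      small = fixed_cost + per_iops * 20_000
      print(f"20K IOPS config: ${small:,.0f} "
            f"({small / result['price_usd']:.0%} of the benchmarked price)")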

  3. The Cube

    EMC benchmarks

    There are lots of EMC block storage systems that get benchmark-tested; the reason EMC won't let any of those results be published should be pretty obvious.

  4. thegreatsatan

    XIV

    I don't have a firm grasp on the dynamics of what is run during SPC-1, otherwise I'd run it on my own XIV system. I will say that I do get solid performance from it during real-world workloads, though I have yet to push it past 30K IOPS on any given day in our production environment.

    1. IO-IO

      hmm

      Considering the IOPS rating of the nearline disks used in an XIV, you would need to drive them beyond the point where latency remains optimal, and get a seriously good cache hit rate, to achieve that result.

      The ATA disks have just over a third of the IO capability of a similar FC disk, so sustaining 30K IOPS is unlikely (and won't be pretty).
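
      As a rough sketch of that arithmetic (the per-disk figures and disk count below are assumptions for illustration, not measured or published XIV numbers):

      # Back-of-envelope for the claim above. Per-disk IOPS and the
      # disk count are assumptions, not measured or published figures.
      SATA_IOPS = 75   # ~7.2K RPM nearline/ATA disk, random small-block
      FC_IOPS = 200    # comparable 15K FC disk ("just over a third")
      DISKS = 180      # hypothetical full XIV-style configuration

      def sustainable_iops(disks: int, per_disk: int, cache_hit: float) -> float:
          """Host IOPS sustainable if only cache misses touch disk."""
          return disks * per_disk / (1.0 - cache_hit)

      print(f"spindles alone: {DISKS * SATA_IOPS:,} IOPS")  # 13,500
      for hit in (0.3, 0.5, 0.6):
          print(f"{hit:.0%} cache hits: {sustainable_iops(DISKS, SATA_IOPS, hit):,.0f} IOPS")
      # Only at roughly 55%+ cache hits does 30K IOPS become reachable:
      # the "seriously good cache hit rate" point above.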

  5. Anonymous Coward

    Hmm....

    First, the results for all this data need to be provided in a tabular format... looking at each one is not cool.

    I think it would be interesting to look at the IOPS per disk drive, because almost everyone uses RAID-10 and very few use RAID6.

    Also, some vendors have included discounts while others haven't??

  6. dikrek

    NetApp does have a recent result

    Hi all, D from NetApp here (www.recoverymonkey.org).

    The NetApp result is for SPC-1E, which is the same as SPC-1 but with extra calculations for energy efficiency. Otherwise it's exactly the same benchmark.

    So here's the link:

    http://www.storageperformance.org/benchmark_results_files/SPC-1E/NetApp/AE00004_NetApp_FAS3270A/ae00004_NetApp_FAS3270A_SPC1E_executive-summary.pdf

    or a bit.ly shortened one:

    http://bit.ly/beR5z3

    So NetApp got 68K IOPS with only 120 disks and the disks were 84% full, and using RAID-DP.

    Far better space efficiency than any other vendor in the benchmark (do the math).

    D
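
    Doing that math with only the figures quoted in this thread (the ~300-disk Pillar count is the one cited further down) gives, roughly:

    # "Do the math": SPC-1 IOPS per disk, using only figures quoted
    # in this thread.
    systems = {
        "NetApp FAS3270, RAID-DP": (68_000, 120),
        "Pillar Axiom, RAID10":    (70_000, 300),
        "NetApp FAS3170, older":   (60_000, 224),
    }
    for name, (iops, disks) in systems.items():
        print(f"{name}: {iops / disks:.0f} IOPS per disk")
    # FAS3270 ~567, Pillar ~233, FAS3170 ~268 IOPS per disk.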

    1. IO-IO

      Netapp, efficient?

      Mr Netapp,

      The result from the previous Netcr*p (3170) scored 60K IOPS. The latest and greatest 3270 (loaded with PAM cards and all that other whizzy stuff) got it up to a maximum of 68K IOPS.

      In 2 years with a major upgrade all it could muster was a feeble 13% more?

      Wow, that really demonstrates how well it scales.

      Must be super efficient when you end up buying 10 of them to do the job of a single enterprise array.

      But then, NAS is the future... in which case, why have they just bought a block storage company?

      A few mixed messages from the home of Notwork appliance.

      1. This post has been deleted by its author

      2. dikrek

        The NetApp result is all about efficiency

        Mr. IO-IO...

        I encourage you to read the full disclosure from each vendor, so you can better understand how things are tested for SPC-1.

        NetApp tries to get the most IOPS with the LEAST number of disks.

        So, the FAS3270 got 68K IOPS with effectively RAID6 and only 120 disks.

        Pillar got about the same IOPS with over 300 disks and RAID10.

        The old NetApp 3170 got 60K IOPS with 224 disks and more latency.

        So, the 3270 got more IOPS with about half the disks.

        I kinda call that improvement :)

        D

  7. Shaun 2

    Timely

    I'm just trying to decide between a FAS3210 and a V7000 for our VDI project... Flash Cache vs Easy Tier...

    They look pretty well matched in the SPC-1E benches, but the vStorage API support tips things in NetApp's favour.

    Storage is complicated!

    1. dikrek

      You can't compare RAID6/RAID-DP and RAID10

      When comparing NetApp numbers to any other vendor, you need to be aware of the fact that nearly all other vendors benchmark with RAID10, yet NetApp sticks to RAID-DP (mathematically the same protection as RAID6).

      RAID-DP/RAID6 have better protection than RAID10.

      So, when comparing, kindly ask the other vendors to show numbers with RAID6, otherwise you are comparing RAID10 (extreme space inefficiency, good performance, good protection) to RAID-DP (good space efficiency, good performance, best protection).

      D
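
      A minimal sketch of the space-efficiency gap being described, assuming a generic 14+2 RAID6 group size rather than any vendor's actual layout:

      # Usable capacity under RAID10 vs RAID6/RAID-DP. The 14+2 group
      # size is an assumption for illustration, not a vendor's layout.
      def raid10_usable(disks: int, disk_gb: int) -> int:
          return disks // 2 * disk_gb                 # everything mirrored

      def raid6_usable(disks: int, disk_gb: int, group: int = 16) -> int:
          full_groups = disks // group                # leftovers ignored
          return full_groups * (group - 2) * disk_gb  # 2 parity per group

      DISKS, DISK_GB = 120, 450
      print(f"RAID10 : {raid10_usable(DISKS, DISK_GB):,} GB usable")
      print(f"RAID6  : {raid6_usable(DISKS, DISK_GB):,} GB usable")
      # 27,000 GB vs 44,100 GB from the same 120 disks, and RAID6/DP
      # also survives any two failures within a group.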

  8. FJ-DX

    a benchmark is a benchmark is a benchmark

    Don't compare apples with oranges; the SPC-1 benchmark is the one under discussion, and I was just looking at the publications which have been accepted by the Storage Performance Council and published at

    http://www.storageperformance.org/results/benchmark_results_spc1

    @Storage_person

    Yes, I have to admit that the (meanwhile withdrawn) Xiotech ISE 2.4 has a better $/IOPS ratio than the ETERNUS DX440.

    But this is again a comparison of apples and oranges, as you are comparing an entry-level system holding 2.4 TB with a full-featured midrange system holding 36 TB.

    Compare the total IOPS:

    Xiotech 8,102.46 vs. ETERNUS DX440 97,488.25, with response times of 7.92 ms vs. 4.83 ms. These systems play in different leagues!

    And take a look at the Xiotech ISE 9.6, which is the same system with more disks:

    SPC-1 values as documented: $6.70/SPC-1 IOPS at 12,603.65 IOPS, with a response time of 11.71 ms at 100% load.


    1. Storage_Person

      More Metrics

      No idea what you mean by "meanwhile withdrawn" for the Xiotech result; I don't see anything noting that on the SPC website. But anyway...

      Your point was about $/IOPS, not total IOPS, but as you're talking absolutes, let's go there. You can imagine putting 12 of the Xiotech boxes next to each other to obtain the same total IOPS at the same $/IOPS (and hence a better overall $ figure), but what about the other way?
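
      That 12-box arithmetic checks out against the figures quoted in this thread; a quick sketch (the ISE 2.4's own $/IOPS isn't quoted above, only that it beats the DX440's, so $5.51 serves as a ceiling):

      # Scale-out arithmetic from the figures quoted in this thread.
      DX440_IOPS, DX440_PER_IOPS = 97_488.25, 5.51   # IOPS, $/SPC-1 IOPS
      ISE24_IOPS = 8_102.46

      boxes = round(DX440_IOPS / ISE24_IOPS)         # 12
      cluster_iops = boxes * ISE24_IOPS              # ~97,230

      # $5.51/IOPS is an upper bound for the cluster, since the ISE 2.4
      # beats the DX440 on $/IOPS (per the thread above).
      dx440_price = DX440_IOPS * DX440_PER_IOPS
      cluster_ceiling = cluster_iops * DX440_PER_IOPS
      print(f"DX440    : {DX440_IOPS:,.0f} IOPS for ${dx440_price:,.0f}")
      print(f"{boxes} x ISE : {cluster_iops:,.0f} IOPS for under ${cluster_ceiling:,.0f}")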

      One of the points I made was that it is good for systems to be able to scale down. A common refrain in the storage industry (not picking on you personally here) is that the bigger arrays are more efficient, cheaper in terms of $/IOPS and $/GB, and so on. This is why people were convinced to lay down half a million per array in the first place. If modular arrays are not only cheaper to purchase due to their smaller size but also have better relative metrics, then about the only thing left for larger arrays is that they are easier to manage.

      That in itself is arguable with modern technologies that manage multiple arrays as a single entity, but even granting that to larger arrays, I'm not sure it will be enough to convince people to continue purchasing larger arrays when smaller ones start to make more sense on every metric.

  9. Barry Whyte

    devil in the detail

    Chris, as always the devil is in the detail.

    Pillar = 292x 15K RPM drives

    V7000 = 240x 10K RPM drives

    The difference in response curves will be down to the relative rotational latency (and throughput) of the drives.
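
    That rotational component is easy to quantify, since average rotational latency is half a revolution; as a quick check:

    # Average rotational latency is half a revolution: 0.5 / (RPM / 60).
    def rotational_latency_ms(rpm: int) -> float:
        return 0.5 / (rpm / 60) * 1000

    for rpm in (15_000, 10_000):
        print(f"{rpm:,} RPM: {rotational_latency_ms(rpm):.1f} ms average")
    # 15K gives 2.0 ms, 10K gives 3.0 ms: a full millisecond per I/O
    # before seek and transfer, visible directly in the response curves.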

    1. Anonymous Coward

      Axiom... more drives, less performance

      Barry, actually I calculate 312 drives (12x26), which gives fewer IOPS per 15K FC drive than the 10K V7000 drives achieve.

