Kaminario pumps up K2 all-flash array processor speed and SSD capacity

Kaminario has more than doubled array capacity and speed with the sixth generation of its K2 all-flash array, mainly by using higher-capacity SSDs and faster controller processors. It has also improved compression and its storage assurance program. A K2 array is composed of one to four K-Blocks, each having two active:active …

  1. Anonymous Coward

    More detail please Mr Kaminario

    So - the Kaminario performance figures come with zero detail behind them. What IO size was used to get these performance numbers?

    I know Pure like to quote their numbers at a block size of 32k, as this is closer to reality than the 4k or 8k usually used for performance metrics.

    If we could all run our applications at 4k or 8k block sizes then we would be fine... but the reality is that application block sizes vary, and these vanity benchmarks are therefore a little meaningless...
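
    As a rough sketch of why the chosen block size matters so much (the 10 GiB/s bandwidth figure below is invented, purely for illustration), the same array throughput yields wildly different IOPS headlines:

        # Python: one hypothetical sustained bandwidth, three "hero" IOPS numbers
        GiB = 1024 ** 3
        bandwidth = 10 * GiB  # assumed sustained throughput of 10 GiB/s

        for bs_kb in (4, 8, 32):
            iops = bandwidth / (bs_kb * 1024)
            print(f"{bs_kb}K blocks -> {iops:,.0f} IOPS")

        # 4K blocks -> 2,621,440 IOPS
        # 8K blocks -> 1,310,720 IOPS
        # 32K blocks -> 327,680 IOPS

    The hardware is identical in all three rows; only the benchmark block size changes.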

    1. Anonymous Coward

      Re: More detail please Mr Kaminario

      32k is as wrong as 4k and 8k. Most applications use a variety of IO request sizes.

      Please stop the pure nonsense

      1. twister68

        Re: More detail please Mr Kaminario

        32K is a good average across mixed workloads: in my past experience, across hundreds of thousands of customers, the average block size was 30K. In my opinion it gives a good blended benchmark for a mixed-workload environment.

      2. bitpushr

        Re: More detail please Mr Kaminario

        Agreed. In my experience, the block size distribution is usually strongly bimodal -- you may get a lot of I/O at, say, 4-8KB, and then you may get a lot of I/O at, say, 32-40KB, with relatively little I/O at other sizes.

        Looking at that and saying "well, the average is somewhere between 8 and 40" is, to my mind, not accurate.
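
        A tiny illustration of the point (the distribution below is made up): the mean of a bimodal distribution lands where almost nothing actually happens.

            # Python: invented bimodal I/O size distribution, size_kb -> fraction of I/O
            histogram = {4: 0.30, 8: 0.25, 32: 0.25, 40: 0.20}

            mean = sum(size * frac for size, frac in histogram.items())
            print(f"mean I/O size: {mean:.1f}K")  # 19.2K

            # Virtually no real I/O is issued at ~19K, so benchmarking at the
            # "average" size tests a workload the application never generates.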

    2. Anonymous Coward

      Re: More detail please Mr Kaminario

      Kaminario quotes latency, IOPS and bandwidth independently of IO size thanks to its Adaptive Block Size (ABS) algorithm. This is why I recommend comparing apples to apples, Mr Mellor: NetApp AFF is simply YAAFA (Yet Another All-Flash Alike), from which I've never obtained sufficiently predictable performance in real life, and Pure is no longer a competitor as it is a scale-up-only Tier 1 box. Neither is an all-flash, general-purpose, software-defined platform delivering both scale-up and scale-out. One word to conclude... try it!

    3. twister68

      Re: More detail please Mr Kaminario

      Funny how the CPU and memory details are always left out -- something Nibble and others seem to do because they have few cores and little memory in their controllers to underpin their over-inflated marketing performance #s.

      Show me the Cores & DRAM

      Twister

      1. Anonymous Coward

        Re: More detail please Mr Kaminario

        The Gen 6 K2 is reported to have 512GB of DDR4 and 20 Broadwell cores per node. So with 8 nodes (their maximum configuration) they'll have 4TB of DRAM and 160 cores of 8-way active processing in the cluster, since the K2 is a symmetric active-active processing engine.

        The Gen 5 K2 had 32 Ivy Bridge cores and 256GB of DDR3 per node, so this array has always had quite a bit of processing power -- which is one reason why the K2 has always been an extremely high-performance array, as anyone who has used one will know.
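
        For what it's worth, the Gen 6 cluster totals quoted above check out:

            # Python: cluster totals from the per-node figures in this comment
            nodes = 8                 # max configuration
            cores_per_node = 20       # Broadwell
            dram_gb_per_node = 512    # DDR4

            print(nodes * cores_per_node)            # 160 cores
            print(nodes * dram_gb_per_node / 1024)   # 4.0 TB of DRAM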

  2. Anonymous Coward

    Pure and 512TB...

    Just so we all speak the same language....

    Pure's max raw capacity is not really 512TB but 491TB. Pure rounds UP drive capacities, i.e. 3.84TB becomes 4TB.
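
    The arithmetic, assuming a 128-drive configuration (the drive count is inferred from 512TB / 4TB, not confirmed):

        # Python: "marketing raw" vs actual raw, assuming 128 drives
        drives = 128            # inferred from 512TB / 4TB per drive
        print(drives * 3.84)    # 491.52 TB actual raw
        print(drives * 4.0)     # 512.0 TB rounded-up raw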

  3. Anonymous Coward

    The Great Block Size Scandal

    Can we all just stop with this "XX block size is closer to reality" scam?

    Every vendor picks the block size that gives them the best hero numbers for their datasheets. That applies to Pure, Kaminario and *everybody* else. Anybody who believes that Pure use 32k because it's "closer to reality" is a fool.

    There is no reality, folks... only a massive variance of block sizes, read/write ratios and random or sequential I/O requests. Datasheet numbers come from synthetic workloads, but real-life workloads are unpredictable and ever-changing. Trying to pick the synthetic benchmark "closest" to your real-life workload is an exercise in futility.
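
    To put a number on that variance -- even three knobs, each with a handful of settings (the values below are arbitrary examples), already multiply into dozens of distinct "benchmarks":

        # Python: how quickly synthetic-benchmark parameter combinations multiply
        from itertools import product

        block_sizes = ["4K", "8K", "32K", "64K"]   # arbitrary example values
        rw_ratios = ["100/0", "70/30", "50/50"]
        patterns = ["random", "sequential"]

        combos = list(product(block_sizes, rw_ratios, patterns))
        print(len(combos))  # 24 distinct workloads from just three knobs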

    1. dikrek

      Re: The Great Block Size Scandal

      Actually, there IS research that links block size to specific apps -- more precisely, ranges of block sizes per app.

      For example, certain DBs will do random I/O in the 8K size range and sequential I/O in huge blocks, while doing redo log writes as 512-byte sequential appends. All very predictable -- especially if one has the tooling to do this kind of I/O research per application...

      This is why the "average I/O size" makes no sense whatsoever.
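
      A toy sketch of that kind of per-app profiling (the trace records below are invented for illustration): bin the I/O by application and size, and the per-app ranges fall straight out.

          # Python: bin invented trace records into per-app I/O size histograms
          from collections import Counter, defaultdict

          trace = [  # (application, io_size_bytes)
              ("oltp_db", 8192), ("oltp_db", 8192), ("oltp_db", 512),
              ("analytics", 1 << 20), ("analytics", 1 << 20),
          ]

          histograms = defaultdict(Counter)
          for app, size in trace:
              histograms[app][size] += 1

          for app, hist in histograms.items():
              print(app, dict(hist))
          # oltp_db {8192: 2, 512: 1}    -- 8K random I/O plus 512B redo appends
          # analytics {1048576: 2}       -- big sequential scans

      No single "average" of those two apps describes either of them.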

      https://www.theregister.co.uk/2016/09/26/getting_nimble_about_storage_array_io_profiling/

      https://www.nimblestorage.com/blog/busting-the-myth-of-storage-block-size/

      Thx

      D

      (Disclaimer: I work at Nimble Storage but "average I/O size" is something that's always irked me).

      1. dikrek

        Re: The Great Block Size Scandal

        Oh - and if you are a Nimble customer, go to InfoSight and look for the new "Labs" tool.

        This will show you I/O histograms per app, per volume or per whatever you want.

        So you can see the true I/O distribution for each application.

        Highly educational.
