NetApp patents Hybrid Aggregates, sneers at PCIe

NetApp has Hybrid Aggregate drives coming, with data moved automatically in real time between flash and the spinning disks it sits next to. The company now says this is a better technology than PCIe flash approaches. NetApp, presenting at an Analyst Day event in New York on 30 June, said that having networked storage move as …


This topic is closed for new posts.
  1. Lusty

    Pie or Art?

    Although they are using the word aggregate rather than LUN, I'm not aware of any vendor that isn't either doing this or about to do it.

    1. Steven Knox

      Apple Tarts and Blueberry Pie

      Aggregates are not LUNs. Aggregates are storage pools. They may contain many volumes, each of which may or may not be a LUN.

      Having said that, I agree that this feature is already available from all vendors I'm aware of.

  2. Destroy All Monsters

    Yet another patent of a new arrangement of teacups on a table.

    Boring, boring, boring.

  3. Disco-Legend-Zeke

    Mine Already Have Something Called A...


  4. Lorddraco

    Fancy name, but same thing

    Fancy name ... Hybrid Aggregate is very similar to sub-LUN tiering, except that the whole aggregate may contain multiple LUNs or filesystems (16TB??).

    Real time ... this is cool, but will there be a ping-pong effect? How small a chunk will they move? There is always a trade-off between real-time and policy-based approaches, each having its own pros and cons. In some environments a block (16KB, 32KB or whatever size they choose to implement) may be busy now but no longer busy by the time it is moved in real time.

    And moving lots of blocks up and down the aggregate will need plenty of resources, as all this movement adds IOPS load to the whole system. Well ... eight cores, multiple sockets, coupled with PCIe 3.0 buses ... it is possible.

    Putting flash straight on the PCIe bus has its own advantages: extremely high IOPS and lower latency (going over the SAN is still milliseconds versus microseconds??) compared with most arrays, but at the price of smaller capacity.

    But ... a fancy name from NetApp again, when most vendors have been shipping this technology for a while ... they may be on version 3.0 or 4.0 by the time NetApp releases this feature.

    By the way ... don't their controllers' HBAs also use PCIe?

    1. Anonymous Coward

      Policies Smolicies

      So Compellent needs policies to auto-tier:

      "It's much more automatic, real-time and granular. Compellent needs policies and is not real-time. [NetApp] will be automatic and always move data real-time, rather than retroactively."

      But NetApp doesn't?

      "the file system including a policy module configured to make policy decisions based on a set of one or more policies and configured to automatically relocate data between different tiers of the multiple tiers of heterogeneous physical storage media based on the set of policies."

      Sounds like the source close to the situation has already drowned in the Kool-Aid, then.
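[The ping-pong worry raised above is usually addressed with hysteresis: a block has to stay hot (or cold) for several measurement intervals before it is moved, so briefly-busy blocks don't bounce between tiers. A minimal, hypothetical sketch of such a sub-LUN tiering decision — the thresholds, interval model, and names here are illustrative assumptions, not NetApp's or Compellent's actual mechanism:]

```python
# Hypothetical sub-LUN tiering with hysteresis. A block is promoted to
# flash only after PROMOTE_AFTER consecutive hot intervals, and demoted
# only after DEMOTE_AFTER consecutive cold ones, damping ping-pong moves.
# All constants are illustrative, not any vendor's real policy.

PROMOTE_AFTER = 3   # consecutive hot intervals before moving to flash
DEMOTE_AFTER = 3    # consecutive cold intervals before moving to disk
HOT_IOPS = 100      # per-interval access count that counts as "hot"

class Block:
    def __init__(self):
        self.tier = "disk"
        self.hot_streak = 0
        self.cold_streak = 0

    def observe(self, iops: int) -> None:
        """Feed one interval's access count and retier if a streak completes."""
        if iops >= HOT_IOPS:
            self.hot_streak += 1
            self.cold_streak = 0
        else:
            self.cold_streak += 1
            self.hot_streak = 0
        if self.tier == "disk" and self.hot_streak >= PROMOTE_AFTER:
            self.tier = "flash"
        elif self.tier == "flash" and self.cold_streak >= DEMOTE_AFTER:
            self.tier = "disk"

b = Block()
for iops in [150, 200, 180]:   # three hot intervals in a row
    b.observe(iops)
print(b.tier)                  # flash
```

[A block that is busy for only one interval never completes a streak, so it stays put — exactly the trade-off the comment describes: hysteresis avoids ping-pong at the cost of reacting more slowly.]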

  5. This post has been deleted by its author

  6. Anonymous Coward

    NetApp's innovation and others' acquisitions

    It feels better when a company actually works to innovate. Good job, NetApp!

    When was the last time the storage leader innovated something special? The great Moshe era.

    After that it has been copy-and-paste of the same code, or acquisitions. But the bezel keeps changing, with a V in front. Hopefully NetApp 8.1 does something disruptive; the 8.0 release was not impressive.

    Unified scale-out will make EMC acquire something new, or make Isilon support FC SAN.

  7. Roland 2

    Cache needs to be closer to the server to be efficient

    If your disk has 5ms latency and the array/network adds 1ms, you see 6ms of latency from the server. Let's say some amount of flash gives you an 80% hit rate with 10 microsecond latency.

    If you place the cache next to the disks, you get roughly 2ms average latency as seen from the server.

    If you place the cache on PCIe in the server, you see roughly 1.2ms average latency, and you take 80% of the read traffic off the array, so you also end up saving on array iron.
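[The figures in this comment follow from simple expected-value arithmetic over hits and misses. A small sketch using the numbers stated above (5ms disk, 1ms array/network, 10µs flash, 80% hit rate):]

```python
# Expected read latency for the three placements described in the comment.
# Values are the ones given above; the model is a simple weighted average.

DISK_MS = 5.0      # spinning-disk service time
NET_MS = 1.0       # array/network round trip
FLASH_MS = 0.01    # 10 microsecond flash access
HIT_RATE = 0.8     # fraction of reads served from flash

def avg_latency(hit_ms: float, miss_ms: float, hit_rate: float = HIT_RATE) -> float:
    """Expected latency given per-hit and per-miss latencies."""
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

# No cache: every read pays network + disk.
no_cache = NET_MS + DISK_MS                                     # 6.0 ms

# Cache next to the disks: hits still cross the network.
array_cache = avg_latency(NET_MS + FLASH_MS, NET_MS + DISK_MS)  # ~2.0 ms

# Cache on PCIe in the server: hits skip the network entirely.
server_cache = avg_latency(FLASH_MS, NET_MS + DISK_MS)          # ~1.2 ms

print(f"{no_cache:.1f} {array_cache:.2f} {server_cache:.2f}")
```

[The only difference between the last two lines is whether a hit pays the 1ms network hop, which is where the server-side cache's 0.8ms advantage comes from.]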

