Nimble not struck by Lightning: We're fine as we are, thanks

Tiered networked storage array startup Nimble Storage won't be struck by the coming EMC Lightning server flash product because its customers don't need it. The Nimble array uses multiple tiers of storage, including NVRAM, flash and disk, to provide large capacity and avoid disk latency when responding to data IO requests, as …

COMMENTS

  1. Ausstorageguy

    No demand?

    Just because customers are not asking Nimble for it doesn't mean there's no need or desire for it among the customer base (or the potential customer base, for that matter).

    The idea of high-speed, ultra-low-latency storage living in an array is wonderful, but unachievable: the reality is that there are many components between the array and the host, each creating latency that is beyond the control of the array designer.

    Let's look at an environment where low latency is required and at what sits in between (a rough latency sketch follows the list):

    - The application

    - Data source

    - The operating system

    - File System

    - File System drivers

    - Volume management

    - Volume management drivers

    - Multipath drivers

    - The host bus

    - The host bus adapter

    - Protocol layering

    - The media from the HBA to switching

    - Switching

    - Protocol layering

    - Media from the switching to the array

    - The array's own internal magic

    - The disks

    - and back again, with each step adding latency to data requests.
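
    To put rough numbers on that chain, here's a minimal sketch that sums illustrative per-layer latencies; every figure in it is an assumption for the sake of argument, not a measurement of any product:

    ```python
    # Back-of-envelope: sum illustrative per-layer latencies along the chain.
    # Every figure below is an assumption, not a measurement.

    chain_us = [
        ("OS, file system, volume manager, multipath", 10.0),
        ("host bus, HBA and protocol layering",         5.0),
        ("media and switching, host to array",          7.0),
        ("the array's own internal magic",             20.0),
        ("backend media read",                        100.0),
        ("media and switching, array to host",          7.0),
    ]

    total_us = sum(us for _, us in chain_us)
    array_us = 20.0 + 100.0  # the only part the array designer controls
    print(f"illustrative round trip: {total_us:.0f} us, "
          f"of which {array_us:.0f} us is inside the array")
    ```

    On those assumed figures, even an infinitely fast array would still leave tens of microseconds of latency that its designer cannot touch.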

    If, for example, the distance between the host and the array is hundreds of metres (with switches in between), then there is a significant impact on latency no matter what you do to the array: signal propagation in fibre alone costs roughly 5 ns per metre, every switch hop adds more, and none of that media latency is within the array's control.
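
    The wire alone is easy to put a number on. A minimal sketch, assuming a 500-metre run, roughly 5 ns per metre of propagation in fibre, and two switch hops each way at an assumed 3 microseconds apiece:

    ```python
    # Wire-only delay between host and array. The 5 ns/m figure is roughly
    # light speed in glass; the distance and switch latency are assumptions.

    distance_m  = 500     # one-way cable run (assumed)
    ns_per_m    = 5       # ~2e8 m/s propagation in fibre
    switch_hops = 2       # switches between host and array (assumed)
    switch_us   = 3.0     # per-hop switching latency (assumed)

    one_way_us = distance_m * ns_per_m / 1000 + switch_hops * switch_us
    print(f"round trip on the wire alone: {2 * one_way_us:.1f} us")
    ```

    That works out to about 17 microseconds per round trip on these assumptions: trivial next to a disk seek, but dominant next to a microsecond-class NVRAM or host-flash hit.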

    However, if you have the ability to place the data in the host, then you're about as close as you can get to the data itself; any closer and it'd be in RAM.

  2. Ausstorageguy

    No demand?

    Cont.

    Now, if you could place the data requiring low latency in the host, keep the data that doesn't need such ultra-low-latency tolerances back in the array, and still get the array's efficiencies (such as tiering and replication), then you're on a win, and so is the customer requesting it.

    To take the view that it “stores an application's entire working set in the server and takes the primary data storage role away from the storage array” misses the point: that's not its role at all.

    Taking data which is very frequently accessed by the host, such as the most heavily accessed portion of a database index file, and “caching” it, if you will, in the host means the host can perform the look-up locally and fetch the remainder of the data from the array.

    Once that data no longer requires the latency or performance, this would provide the ability (I assume) to tier it back to the array and on down the tiers.
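
    A minimal sketch of that idea, with a toy LRU read cache standing in for host flash and a stub Array class standing in for the SAN round trip; the names and policy here are illustrative assumptions, not how Lightning actually works:

    ```python
    from collections import OrderedDict

    class Array:
        """Stub for the shared array; a real read is a SAN round trip."""
        def read(self, block):
            return f"data-{block}"

    class HostFlashCache:
        """Toy LRU read cache: hot blocks are served from host flash,
        cold blocks fall back to (and remain on) the array."""
        def __init__(self, array, capacity_blocks=4):
            self.array = array
            self.capacity = capacity_blocks
            self.cache = OrderedDict()          # block id -> data, LRU order

        def read(self, block):
            if block in self.cache:             # hot: host-local lookup
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.array.read(block)       # cold: fetch from the array
            self.cache[block] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # demote the least-recent block;
            return data                         # the array still holds its copy

    cache = HostFlashCache(Array())
    for blk in [1, 2, 1, 1, 3, 4, 5, 1]:        # block 1 is a hot index block
        cache.read(blk)
    print(list(cache.cache))                    # -> [3, 4, 5, 1]
    ```

    Demotion in the toy is simply dropping the host-side copy, which works because the array stays the system of record; a real implementation would also have to worry about write handling and coherency.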

    Sure, you could deploy a much lower-cost solution such as Fusion-IO, but taken as a whole, the cost of holding the entirety of the data on flash would far exceed the savings, and again, there would not be the level of protection or efficiency offered by a shared storage array.

    Fusion-IO do offer very similar capabilities with ioTurbine and directCache, arguably better in that they're vendor-agnostic (apart from requiring Fusion-IO cards). However, it could be said that they're not as robust: they're not array-aware. Sure, they'll see array LUNs, but they aren't aware of whatever tiering may be underlying them, nor able to provide redundancy against card loss.

    That’s what EMC are doing with Lightning. Smart really.

    Whilst there may not be a huge subset of customers out there needing it, there must have been enough demand for EMC to invest the money in it. And you know there will be others who follow suit with similar or arguably better solutions, such as tiering from host-based SATA/SAS and PCIe SSD, but it's a start.

    Aus Storage Guy

This topic is closed for new posts.