Are SANs right for converged stacks?

This topic was created by Chris Mellor 1.

COMMENTS

This topic is closed for new posts.
  1. Chris Mellor 1

    Are SANs right for converged stacks?

    IBM's PureSystems single-product (converged stack) systems are built from servers with DAS plus a SAN, a Storwize V7000, inside the rack. Okay, we have block-access arrays and we have Fibre Channel and iSCSI, but, really, do we need SAN infrastructure to interconnect a box of hard drives and SSDs with a bunch of servers in the same rack? It seems overkill to me. Would we set out to design a data store for a bunch of servers, with the data store and the servers all in one rack, this way?

    Why not add more disk drives to the individual servers and use virtual storage appliance software, like the HP P4000 VSA, to aggregate the disks and provide a logical SAN? Then we can ditch the separate storage network. We also don't need to buy storage enclosures and array controllers; that functionality is provided by the VSA software running in the servers.
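
    As a toy illustration of that pooling idea, here is a short Python sketch. It is not the P4000 VSA or any vendor's API; the node names and the simple two-way "network RAID" mirroring are assumptions, made only to show how server-internal disks can be aggregated by software alone into one logical, protected pool.

        # Hypothetical toy model of VSA-style pooling (not real P4000 code).
        # Each server node contributes its internal disks; the pool presents
        # the aggregate and mirrors every volume across two different servers,
        # so losing one server does not lose the data.
        class Node:
            def __init__(self, name, raw_gb):
                self.name = name
                self.free_gb = raw_gb

        class Pool:
            def __init__(self, nodes):
                self.nodes = nodes

            def usable_gb(self):
                # Two-way mirroring: usable capacity is half the raw total.
                return sum(n.free_gb for n in self.nodes) // 2

            def place_volume(self, size_gb):
                # Put the two mirror copies on the two emptiest servers.
                a, b = sorted(self.nodes, key=lambda n: n.free_gb)[-2:]
                if min(a.free_gb, b.free_gb) < size_gb:
                    raise RuntimeError("not enough mirrored capacity")
                a.free_gb -= size_gb
                b.free_gb -= size_gb
                return a.name, b.name

        pool = Pool([Node("srv1", 2000), Node("srv2", 2000), Node("srv3", 2000)])
        print(pool.usable_gb())        # 3000 GB usable from 6000 GB raw
        print(pool.place_volume(500))  # e.g. ('srv2', 'srv3')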

    The cost of goods goes down and the complexity of the system is reduced as well.

    We could even use a PCIe fabric to interconnect the servers and the disk/solid-state drives if we wished to drop latency a notch. Using a SAN for a rack-area network seems ... well, dumb.

    Am I smoking pot?

    Chris.

    1. Radek

      Re: Are SANs right for converged stacks?

      Hi Chris,

      I will specifically pick on the P4000 idea, as I have bashed it in the past already - in a different context, though the points raised below are perfectly applicable here:

      "The HP P4000 product range utilises industry standard servers, rather than specialised storage devices. This in turn means that each storage node has a number of single points of failure (SPOFs) in the form of e.g. server motherboard, CPU and/or memory subsystem. As each server has access to its internal (or directly attached) storage only, there is no other way to deliver local HA than to mirror the data across two (or more) local storage nodes. Because storage nodes are stretched across the network, maintaining a quorum is required to issue an automated node fail-over (to avoid so called split-brain scenario where two nodes becomes active due to network isolation). So just two storage nodes are not enough to accomplish this goal – a small virtual machine called Failover Manager (FOM) has to be up and running somewhere on the local subnet for automated fail-over purposes. This obviously increases the complexity of the solution which looks simple only in marketing materials."

      Regards,

      Radek

    2. @hansdeleenheer

      Re: Are SANs right for converged stacks?

      The only question I have (for now) is: are we willing to use VSAs as a production SAN? As I recall from a podcast, the P4000 VSA has an xx% performance impact compared with a physical one with the same hardware config. And do we have to scale with identical configurations, as we do in the physical clusters?

      On the other hand, a VSA gives us way more flexibility to tune the hardware (SSDs, FusionIO, SCSIe, ...).

    3. @jgilmart

      Re: Are SANs right for converged stacks?

      Chris,

      When it comes to traditional SAN infrastructure, I absolutely agree. You're adding unnecessary cost and complexity into the overall system. The resulting solution is inflexible and difficult to scale.

      However, there is still value in separating compute from storage: independent scaling, shared access, data protection with lower overhead, etc. The trick is to make that shared storage as simple and as cost-effective to deploy and manage as local disk while retaining all the benefits of the network. You can't do it with FC, FCoE, or iSCSI. The key ingredients are an Ethernet SAN, a scale-out architecture, and off-the-shelf hardware to turn shared disks into "virtual DAS" with low-latency, massively parallel access from every compute node.

      John Gilmartin, Coraid

  2. Marin Debelic

    Pot

    "Am I smoking pot?"

    No, you ain't, Chris.

    I asked the same question a few days back on Twitter.

    The only reason I can still see for a SAN in a converged stack is to provide independent scaling of server units vs disk spindles (one to many, many to one, etc.) within the individual converged stack.

    However, this adds major cost and complexity, and given the falling cost of both disks and servers, the question is: is it worth it? Why not overprovision either side (server or disk) and use it as you see fit, together with a software virtualization layer?

    1. Chris Mellor 1

      Re: Pot

      Interesting set of replies. I'm writing a story about OnApp (a UK cloud infrastructure technology provider), which says it has reinvented the SAN for cloud service providers and which uses a quasi-VSA approach.

      Coraid's Ethernet SAN idea looks nice. The server SPOF vulnerability looks to be a point that needs to be worked around.

      How about this: suppose it will be a server/systems supplier developing converged stack systems - meaning Cisco, EMC, Dell, HP and IBM - that reinvents the SAN for a rack-area storage facility, because the customers for any startup developing the technology are too few. What, then, is the motivation for these suppliers to reinvent the SAN for converged systems? They can carry on doing what they are doing and not cannibalise existing SAN sales.

      So it has to be an outsider who sees enough sales to justify the cost of developing the technology. That means sales won against the existing SAN suppliers.
