Software-defined traditional arrays could be left stranded by HCI

A rising tide lifts all boats and the Nutanix IPO signals that all the hyper-converged infrastructure product boats are going to get a lift. Where does that leave software-defined storage (SDS) – stranded on a mudbank? If your business is providing cheap storage arrays through having customers run your array, controlling …

  1. Marc 25

    If you're a storage vendor and you haven't thought of this by now, you're already dead. Any vendors worth their salt should already have a plan in place to provide some sort of HCI offering.

  2. elliottmichael

    Not so fast

    One point where traditional storage vendors can stay relevant is by allowing access via RESTful APIs. It's the only way they can fit into an SDDC.
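    For illustration, a minimal sketch of what that looks like from an orchestrator's point of view, assuming a hypothetical array endpoint, credentials and schema (every real array has its own paths and auth scheme):

        # Provisioning storage over a RESTful API -- the endpoint,
        # credentials and payload here are hypothetical, illustration only.
        import requests

        ARRAY = "https://array.example.com/api/v1"  # hypothetical base URL
        AUTH = ("svc_provision", "secret")          # hypothetical credentials

        # Inventory the array's volumes programmatically.
        volumes = requests.get(f"{ARRAY}/volumes", auth=AUTH).json()

        # Carve out a 100 GiB thin volume with no GUI or vendor CLI.
        resp = requests.post(
            f"{ARRAY}/volumes",
            auth=AUTH,
            json={"name": "vmware-ds-42", "size_gib": 100, "thin": True},
        )
        resp.raise_for_status()
        print("created volume:", resp.json()["id"])

    An interface like that is what lets SDDC tooling treat the array as just another pool of resources to automate against.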

  3. Nate Amsden

    as a non-HCI customer

    I have no interest in HCI and prefer best-of-breed solutions, which for me are ProLiant, 3PAR (Fibre Channel) and VMware. I also run LXC on bare-metal ProLiant systems. The uptime on some of my storage is longer than some HCI vendors have been shipping products.

    HCI sounds great for edge and branch-office (internal) IT. But I've been on the SaaS datacenter production side for more than a decade, and HCI has no value for me in that space.

  4. NBNnigel

    "Hyper-converged infrastructure (HCI) systems combine servers controlled by or running hypervisors converged with storage and networking."

    So... it's the whole server bundled as a vendor-specific appliance? Sounds awesome, especially if you're a vendor. And I guess it would be easy to scale, but probably not very cost-efficient. And given that economies of scale still rule the day in data centres...

    1. dandre83

      It can be. Many HCI products are sold as appliances that are difficult or impossible to scale except by purchasing more appliances.

      Maxta sells a software-only hyper-converged solution that lets you run hyper-convergence on your choice of x86 hardware and hypervisor (VMware-centric, but others are supported as well). With a software-only model you can scale within a node or across the cluster; you are limited only by the capacity of the servers themselves (see the capacity sketch below).

      Full disclosure: I am a Maxta employee.
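      To make the scale-up/scale-out point concrete, here is a back-of-envelope capacity sketch; the replication-factor model and all the numbers are illustrative assumptions, not Maxta specifics:

          # Usable capacity of a scale-out storage cluster: raw space
          # divided by the number of data copies kept (illustrative).
          def usable_tib(nodes, raw_tib_per_node, replication_factor):
              return nodes * raw_tib_per_node / replication_factor

          # Scale out: add a fifth 20 TiB node to a 4-node, RF=2 cluster.
          print(usable_tib(4, 20, 2))  # 40.0 TiB usable
          print(usable_tib(5, 20, 2))  # 50.0 TiB usable

          # Scale up: grow each existing node to 30 TiB instead.
          print(usable_tib(4, 30, 2))  # 60.0 TiB usable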

      1. NBNnigel

        sounds sensible

        To me, that sounds more sensible. Although I have to admit it sounds suspiciously similar to having a bunch of commodity servers configured as a cluster via some sort of orchestration/scheduling software (like Maxta?). If this falls under the definition of HCI, then I wonder if HCI is just another marketing buzz-phrase floating around in enterprise-tech vendor world ("You should buy our HCI appliance... er I mean 'solution'. You can't fight synergy... er I mean HCI... it's bigger than all of us").

        Frankly, I can't help but think that some of the hype around HCI is just the latest attempt by 'hardware integration' vendors (i.e. appliance makers) to stave off the commoditisation of their market. Or, in other words, just another way to vendor-lock customers. I can see how SMEs might benefit from purchasing an HCI appliance when the savings from low administrative overhead are bigger than the efficiency cost of capacity mismatch and scaling 'lumpiness'. But surely we're only talking about 20-30 percent of the total market, at most?

        It's also interesting to think about all of this in the context of two longer-term trends in server hardware: (1) the eventual convergence of storage and random-access memory/processing cache (i.e. NVMe being the latest step along that path) and (2) the increasing viability of deploying low-latency, high-bandwidth, RDMA-capable networks.

        These two trends seem to point in opposite directions. The former suggests 'compute' will eventually be pulled back into these 'hyper-converged'... things... because latency between processing and cache makes sequential computation less efficient. But the latter suggests that the components of computation (processing <--> communication bus <--> cache/volatile memory --> storage) can be physically separated, and thus scaled separately, without incurring the sequential computation penalty. Simply put, as internal 'networks' become giant PCIe buses, it makes less and less sense to physically converge all of your computing, memory and storage hardware into a single appliance.
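        Some rough arithmetic makes the asymmetry clearer; all of the latency figures below are ballpark assumptions, not measurements:

            # Why storage disaggregates before memory does: ballpark
            # latencies, all assumed figures for illustration.
            DRAM_NS = 100   # local DRAM access, nanoseconds
            NVME_US = 80    # local NVMe read, microseconds
            RDMA_US = 5     # one RDMA round trip on a fast fabric, us

            # A 5 us hop on top of an 80 us media access is noise...
            print(f"remote NVMe penalty: {RDMA_US / NVME_US:.0%}")  # ~6%

            # ...but the same hop on a 0.1 us DRAM access is crippling,
            # so memory/cache stays physically converged with compute.
            print(f"remote DRAM penalty: {RDMA_US * 1000 / DRAM_NS:.0f}x")  # 50x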

        So to me, the more interesting question is: when will high-performance networking be sufficiently standardised to allow 'hyper-deconvergence', maybe even retrofitting of existing under-utilised hardware resources? If I were an appliance maker, or a CPU/storage manufacturer for that matter, this would be the question keeping me up at night. The other question keeping me up at night would be: 'how can I best derail high-performance networking standardisation'...

    2. terry.murray@Lumenate.com

      I've spent more than 15 years architecting and implementing storage solutions, most of that time at a company that made a living selling storage. Many organizations can meet all of their needs with HCI, and most would be well served using HCI for large portions of their environment.

      I know this not because the vendors tell me this but because I talk to customers that are doing it. I see what scaling or refreshing a traditional environment looks like vs HCI. I see the expertise required to do traditional architecture well versus the simplicity of HCI.

      I'm not saying people can't build a better solution on their own. Some can, some can't. I'm not even saying HCI is better, I'm saying it is good enough, it is cheaper and much simpler.

      That's a formula that will continue to gain market share.

  5. dikrek

    It's not about HCI per se

    It's interesting to examine why some people like the HCI model.

    Part of the allure is having an entire stack supported by as few vendors as possible. Very few HCI vendors fit that description.

    The other part is significantly lower OPEX. Again, not all HCI vendors shine there.

    And for CAPEX - the better HCI vendors that fit both aforementioned criteria tend to be on the expensive side. So it's not about CAPEX savings.

    It's also interesting that the complexity of certain old-school vendors has quite a bit to do with newer solutions (not just HCI) becoming more popular. Compared to certain modern storage systems, you may find that the difference in ease of consumption is minimal.

    Be careful that, in chasing the HCI dream, you don't give up things that have kept you safe for decades.

    Case in point: several HCI and SDS vendors don't do checksums! (Even VSAN and ScaleIO only recently started doing optional checksums).

    That's like saying I like electric cars, and to achieve the dream of having one I need to give up ABS brakes.
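    To make the checksum point concrete, a minimal sketch of the end-to-end protection some of these stacks skip (CRC32 here purely for illustration; real arrays use stronger schemes):

        # Store a checksum with each block on write, verify on read,
        # so silent on-media corruption fails loudly instead of
        # returning bad data. Illustrative toy, not a vendor's design.
        import zlib

        def write_block(store, key, data):
            store[key] = (zlib.crc32(data), data)

        def read_block(store, key):
            crc, data = store[key]
            if zlib.crc32(data) != crc:
                raise IOError(f"silent corruption detected in block {key}")
            return data

        store = {}
        write_block(store, 0, b"customer ledger page")
        assert read_block(store, 0) == b"customer ledger page"

        # Simulate a bit flip on the medium: the next read raises
        # IOError rather than silently handing back corrupt data.
        crc, _ = store[0]
        store[0] = (crc, b"cusUomer ledger page")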

    Or things like proper firmware updates for all the components of the stack. Again, many solutions completely ignore that. And that inability can significantly increase business risk.

    More here:

    http://recoverymonkey.org/2016/08/03/the-importance-of-ssd-firmware-updates/

    Disclaimer: I work at Nimble but if I didn't there's only one HCI vendor I'd go to work for. One. Out of how many?

    Thx

    D

    1. NBNnigel

      Re: It's not about HCI per se

      @dikrek

      I think your characterisation (in terms of OPEX and CAPEX) is a very useful way to think about this issue.

      Perhaps one other thing to be wary of: the notion that CAPEX is a 'once-off, upfront' expenditure. In many cases it's periodic (i.e. capacity scaling). And the best case, IMHO, is when CAPEX becomes continuous (i.e. OPEX) and time-bound (I guess what people mean by the term 'elastic'?). Best case for buyers anyway, as the conversion of CAPEX to time-bound OPEX implies the resources have become commodities (highly competitive supply market), incremental (smaller units allow better capacity matching), and temporal (capacity needs vary across time, sometimes fluctuating on a 'time of day' basis).
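      A toy model shows why the lumpy-vs-elastic distinction matters; every figure below is invented for illustration:

          # 'Lumpy' CAPEX vs elastic OPEX for the same demand curve.
          # All numbers are invented for illustration.
          demand_tib = [40, 45, 55, 70, 90, 60]   # demand per period

          # Lumpy: buy ahead in fixed 50 TiB increments and carry
          # everything installed, every period.
          installed, capex_paid = 0, 0
          for d in demand_tib:
              while installed < d:
                  installed += 50
              capex_paid += installed

          # Elastic: pay only for what each period actually uses.
          opex_paid = sum(demand_tib)

          print(f"lumpy:   {capex_paid} TiB-periods paid")   # 500
          print(f"elastic: {opex_paid} TiB-periods paid")    # 360
          print(f"overprovisioning: {capex_paid / opex_paid - 1:.0%}")  # 39%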

  6. eric@ Evaluator Group

    HCI in the enterprise

    I must say I'm a little confused after reading this article, probably due to the different ways hyperconverged has been used by vendors, the press, analysts, etc. (I'm an analyst with Evaluator Group). We cover all the Hyperconverged Infrastructure (HCI) appliances - e.g. Nutanix, SimpliVity, EMC's VxRail (VSAN), HPE's StoreVirtual-based solutions and about a dozen others. What we're finding is that most IT users are more interested in saving time/effort getting infrastructure up and running than in consolidating vendors.

    Also, as much as the HCI appliance vendors want to say they're selling to the "enterprise", we're finding that most of these folks (5,000+ employee companies) are interested in a software-only "roll your own" approach to HCI rather than the turnkey appliances like Nutanix, etc. The most often mentioned solution was VSAN.

    The other thing we're seeing is that these enterprises consider HCI another tool in the box, something that will replace some traditional infrastructure, but certainly not all. Many like HCI because it's a way to offload the care and feeding of some of their infrastructure to the teams using it, like the server virtualization teams and VMware admins.
