Hyperconverged infrastructure. It's all about the services

Hyperconverged Infrastructure (HCI) isn't a product, it's a feature. The future lies in turnkey cloud solutions. This means that there are certain IT services HCI vendors need to bring to the table to remain relevant. At its most basic, HCI is virtualization + storage. You take hard drives, put them into servers and put a …

COMMENTS

  1. baspax

    Software Validated Stacks

    Now that we are seeing hardware stacks converge, the next thing is software-validated stacks.

    There are so many players for DR, backup, workload optimization, cloud mobility, cloud management, encryption, compliance, etc. You know, the logo walls in every presentation. With so many really smart and good companies and products out there, I don't understand why we should be limited to the ideas and development prowess (or lack thereof) of a single vendor.

    In the same way we saw CI like FlexPod, Vblock, and others leverage best-of-breed tech and flexibility to implement solutions better suited to our needs, while maintaining some level of predictable interoperability and simplified support and operations, we are going to see composable stacks of solutions with interchangeable modules: Veeam or Cohesity for backup? Here is the API and middleware you program against! NSX, ACI, Aviatrix, or Illumio for networking? Here is the API and middleware you program against!
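
    Conceptually, the contract could look something like this. A minimal Python sketch, with every name invented for illustration (no vendor actually ships this API):

        from abc import ABC, abstractmethod

        class BackupProvider(ABC):
            """Contract every backup module implements, whoever ships it."""

            @abstractmethod
            def snapshot(self, vm_id: str) -> str:
                """Snapshot a VM and return a snapshot ID."""

            @abstractmethod
            def restore(self, snapshot_id: str) -> None:
                """Restore a VM from a snapshot ID."""

        # One registry on the platform side; Veeam, Cohesity, etc. would
        # each register an implementation of the same contract.
        PROVIDERS = {}

        def register(name):
            def wrap(cls):
                PROVIDERS[name] = cls
                return cls
            return wrap

        @register("veeam")
        class VeeamBackup(BackupProvider):
            def snapshot(self, vm_id):
                return "veeam-snap-" + vm_id   # placeholder, not a real call

            def restore(self, snapshot_id):
                print("restoring", snapshot_id, "via Veeam")

        # The operator picks the module in config, not in code:
        backup = PROVIDERS["veeam"]()
        print(backup.snapshot("vm-42"))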

    The first vendor to embrace this and create a framework that enables and manages it will grab the majority of the market share. Those that try to boil the ocean and build everything in-house (looking at you, VMware and Nutanix) will most likely fail.

    1. Erik4872

      Re: Software Validated Stacks

      "Here is the API and middleware you program against!"

      But isn't that a chicken-and-egg problem? Right now, at the height of Dotcom Bubble 2.0, we have 10,000 companies from 5-person startups to Red Hat all pushing framework after framework, API after API, container after container. Unless _one or two_ of these become an absolute rock-solid industry standard (we're talking an RFC-level, completely open, no-lock-in standard), picking one vendor's framework and toolset is going to lock you into that vendor, and it'll be at the hardware level this time.

      I see the same thing working for a software development company; there are another 10,000 vendors of all sizes pushing their own magic DevOps toolkit -- as in, buy our containerization framework and we won't just put your apps in containers, we'll containerize the containers and make them cross-cloud capable! What they don't tell you is that DevOps isn't a magic tool, and if you don't have the culture in place (i.e. a 20-person startup huddled around a cafeteria table), then implementing it is much harder than advertised.

      1. baspax

        Re: Software Validated Stacks

        What you wrote is so true in so many ways.

        It's not really what I meant, though. In the olden days, certified reference designs were usually pushed by either the storage or the server vendor or, later, by OS and hypervisor manufacturers. Take Dell or Cisco: here is the reference design for our servers plus network plus Pure/NetApp/Nimble and VMW/Hyper-V/Xen/OpenStack/Baremetal Linux/Oracle/SQL/Hadoop.

        The addition, integration, and validation of, say, load balancers, software-defined networking, PaaS, DR/workload mobility, or backup/archive was always handled afterwards.

        Now that the infrastructure stack is "solved", the next step is to build reference architectures with these modules instead. Veeam or Zerto or Datos.io? No problem, here is how. In the same way that storage vendors knocked on Dell's and Cisco's doors and submitted white papers and validated designs for their products, many of the more successful software vendors will submit theirs. Actually, they've been doing that for quite a while; most have an HPE or Cisco (and less frequently Dell) validated design.

        The challenge will be in the interoperability testing and in creating attractive packages. This has traditionally been the realm of VARs and system integrators, on a one-off, individual basis, but it will become more and more standardized. If a vendor's gravity is big enough and they are willing to create an ecosystem like that, it might actually work. Unfortunately, not many vendors are left that can execute, due to politics. Dell/EMC is tied to VMware, so they'll most likely ride the VMware ecosystem instead of building one centric to Dell. Microsoft might do it (and then buy/OEM the better products). I don't see HPE executing with any kind of coherent strategy. Cisco might, if they get serious about the data center; they have shown they can play the multivendor ecosystem game quite well in the past, though I'm not sure it will translate into the software space. We might also see large system integrators pop up with something like a Vblock but based on, say, Nutanix or any other platform, with standard bolt-ons like Avi for load balancing, Veeam for backup, Aviatrix or NSX for SDN. You get the drift.

        Then on the other end of the spectrum we see Amazon trying to write literally everything themselves: DBMS, OS, cloud platform, storage platform, load balancers, you name it. IT'S THE EVERYTHING SHOPPE! Amazon will let an ISV develop against their APIs but will then mercilessly copy everything they deem valuable. And of course we have Nutanix, who tries to do the same as Amazon, except with far fewer resources and less money, and they aim to launch ten products at once, whereas Amazon started slowly and only ramped up once they achieved mature service delivery (which took them years).

        We might of course also see a bunch of acquisitions, with each vendor building its own megastack with load balancing, DR services, backup, security, etc. integrated.

      2. thondwe

        Re: Software Validated Stacks

        Isn't Azure Stack what you're describing?

  2. nilfs2
    Coat

    HCI is more expensive than traditional SAN + servers

    HCI is more expensive than the traditional architecture, and without reason; there are no financial benefits for the customer. As an example, I have compared the price of a 3-node Nutanix Xpress block vs a NetApp FAS2520 + Dell servers: $30K vs $20K. If you go with SuperMicro kit (like Nutanix does, but charges for it as if it were made of gold!) instead of NetApp and Dell kit, you can go even cheaper.

    The HCI architecture makes more technical sense than the traditional architecture, I'm not debating that. The problem is that HCI vendors are trying to sell overpriced snake oil, with ROI figures that not even their mothers would trust.

    1. Anonymous Coward
      Anonymous Coward

      Re: HCI is more expensive than traditional SAN + servers

      Nutanix is way overpriced. You might want to look elsewhere. Other vendors give you enterprise-class servers (Dell and Cisco) for way less $ than Nutanix. Less than FlexPod, even.

    2. Anonymous Coward
      Anonymous Coward

      Re: HCI is more expensive than traditional SAN + servers

      I haven't been directly involved with an HCI deployment, but my own theory is that it's making inroads not so much because of the lower cost of the solution itself, but because companies can reduce their pool of "expensive" specialized staff (SAN admins, server admins, network admins, etc.) down to a pool of "inexpensive" IT generalists.

      If you could reduce a staff of 4 or 5 specialists down to 1 or 2 generalists because the HCI solution is easier to manage, even if the solution itself is more expensive, then the TCO is lower. Toss the SAN admin making $80,000 and replace him with a generalist making $40,000. That one change in FTE alone saves a recurring $40k a year (more when you factor in other employment expenses), which would cover a lot of HCI's initial expense.
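
      Back of the envelope, in Python (the salaries are the made-up ones above; the HCI premium is an equally made-up assumption):

          specialist_salary = 80_000     # SAN admin, per the numbers above
          generalist_salary = 40_000     # IT generalist
          hci_premium       = 100_000    # assumed extra up-front cost of HCI

          annual_saving = specialist_salary - generalist_salary      # $40k/year
          payback_years = hci_premium / annual_saving
          print(f"premium paid back in {payback_years:.1f} years")   # 2.5 years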

      Not saying I'd go for HCI myself, because if I ran my own company, I'd be more inclined to have those specialists on staff for when the s**t hits the fan, rather than a bunch of generalists. But I understand why a lot of companies would go the other direction and look at HCI.

      It's funny how old technologies keep resurfacing. Years ago we had DAS, then moved to SANs in the '90s, and now it seems a DAS-type infrastructure is becoming appealing again. I guess 10 years from now it will be SAN 2.0's turn.

      1. Anonymous Coward
        Anonymous Coward

        Re: HCI is more expensive than traditional SAN + servers

        Only these days SANs are designed for generalists to be able to set up and use... EMC Unity being a good example.

      2. baspax

        Re: HCI is more expensive than traditional SAN + servers

        You are absolutely right. From the point of view of the business, there is no incremental value in zoning a LUN or balancing a storage pool. If it can be done by software, perfect; more resources for other tasks.

        Like building defenses against crippling ransomware attacks. Every decision maker I talk to speaks of little else.

    3. Jbry

      Re: HCI is more expensive than traditional SAN + servers

      "HCI is more expensive than the tradition architecture, that without a reason, there are no financial benefits for the customer"

      Only if you don't put a value on your employees' time spent managing the infrastructure. If it takes less time to manage and support... that's worth something, isn't it?

  3. Anonymous Coward
    Anonymous Coward

    That and some public cloud bursting capabilities to AWS, Azure and Google Cloud Platform (the big 3).

    Every business under the sun is using, or thinking about using, one or more of the three services above. If the HCI players want to make "private cloud" work, they are going to have to get around public cloud's ability to scale infinitely on demand by letting people scale out to the public cloud at peak, eliminating the utilization issue.
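
    The utilization argument is easy to sketch in numbers (all invented for illustration):

        baseline_nodes   = 10        # capacity needed year-round
        peak_extra_nodes = 6         # extra capacity needed ~1 month a year
        onprem_node_year = 12_000    # assumed on-prem cost per node per year
        cloud_node_month = 1_500     # assumed cloud equivalent, per month

        # Size everything on-prem for peak, vs burst the peak to cloud:
        size_for_peak  = (baseline_nodes + peak_extra_nodes) * onprem_node_year
        burst_to_cloud = (baseline_nodes * onprem_node_year
                          + peak_extra_nodes * cloud_node_month)

        print(size_for_peak, burst_to_cloud)   # 192000 vs 129000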

    1. Anonymous Coward
      Anonymous Coward

      DellEMC, VMware, Nutanix, and Cisco already do that. DellEMC and VMware burst to AWS (already available), Nutanix to Google (coming Q3, they say), Cisco to all clouds (they acquired CliQr a while ago).

      VMware's is cool, as you can run NSX and VSAN in AWS, which makes it easy and saves you two components (network and storage).

      Nutanix has no product yet, although it will include storage in GCP. Microsegmentation/SDN has been announced but with no timeline. I doubt they'll have anything stable soon.

      Cisco's CliQr is a powerhouse supporting every single cloud out there, although it's a little more geared towards DevOps. It requires some sort of SDN (it will support anything, but the Cisco guys will probably want to sell you ACI); Aviatrix seems to be popular with the cloud-native crowd. You'll also need something to haul data between clouds, and there are tons of products out there. While this gives you the flexibility of choice, it's something you'll have to research.

      Cisco's is the most mature, followed by VMware's. Nutanix is too early to call; as with everything else of theirs, I'd wait until v2.0.1.

  4. John Smith 19 Gold badge
    Thumb Up

    Another excellent (and short) tutorial on this area for the non-specialist.

    Thank you once again. One especially intriguing nugget was this point.

    "Virtualization doesn't just mean x86 hypervisors. "

    Now if you want to migrate off a 40-year-old instruction set design, this sounds quite important. It also implies a way to sift out quite a lot of the HCI offerings quickly.

    OTOH

    "Windows is going to keep on storing profiles and folder redirections on SMB until the bitter end "

    Is quite depressing.

    Presumably the later versions are a lot more secure than v1.0.

    SMB is still a fine example of the former Chairman's policy of "grab them by the protocols (at all levels) and the customers will follow you anywhere".

  5. Oneman2Many Bronze badge

    Re: HCI is more expensive than traditional SAN + servers

    To be honest, SAN has been relegated to specialist workloads like superclusters, mainframes, etc. All-flash NAS has enough performance and flexibility for the majority of workloads at a fraction of the overall price.

    As per usual, IT moves in circles. For high-transaction patterns where you want low latency, I see a move back to software-managed DASD to get the data off the network. We could even give it a buzzword like "composable".

This topic is closed for new posts.