Hyperconvergence 101: More than a neatly packaged box of tricks

In a world of complex technologies and unforgiving business environments, simplicity in IT is good. Technology teams want to get the job done with as little fuss – and as little drain on management resources – as possible. Hyperconvergence promises to deliver that simplicity, but how does it differ from more traditional …

  1. RollTide14


    This is what I love about the Gartner folk... they are always comparing new apples to eight-year-old apples and oranges.

    “[Customers have] the ability to start very small (two to three nodes) and grow resource at a very granular level,” he said. “So the minimum investment in HCIS can be as low as $20-30,000, whereas a blade/SAN-based system generally requires an investment of $300,000 or more.”

    Most storage and compute systems have the ability to "start small and grow at a granular level". If the SAN/blade system you are looking at is really $300,000, then there is a ZERO percent chance that the HCI config that will meet your needs is $20,000.

    Maybe not the case if you're looking at a VMAX or a high-end Hitachi, but guess what: those things deliver a ton of extra value that your HCI setup can't. HCI is great because it simplifies the management, but you always pay a premium for simplicity, whether that's in dollars, lack of flexibility, lock-in, etc.
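The granular-scaling argument above can be put into numbers. A toy Python model, using a hypothetical per-node HCI price and a chassis-sized step for the blade/SAN side (figures loosely derived from the quoted Gartner numbers, not a real price list):

```python
# Toy cost model for the "granular vs step" scaling argument.
# All prices are illustrative, loosely based on the quoted figures.

HCI_NODE_COST = 10_000       # assumed per-node HCI price (2-3 nodes ~ $20-30k)
SAN_BLADE_STEP = 300_000     # assumed minimum blade/SAN investment per chassis

def hci_cost(nodes: int) -> int:
    """HCI grows granularly: you pay per node added."""
    return nodes * HCI_NODE_COST

def san_cost(nodes: int, nodes_per_chassis: int = 16) -> int:
    """Blade/SAN grows in steps: you pay per chassis, however full it is."""
    chassis = -(-nodes // nodes_per_chassis)  # ceiling division
    return chassis * SAN_BLADE_STEP

for n in (3, 16, 17):
    print(f"{n:>2} nodes: HCI ${hci_cost(n):,} vs blade/SAN ${san_cost(n):,}")
```

The point is the shape of the curves, not the dollar values: HCI spend tracks node count, while the blade/SAN spend jumps a whole chassis at a time — which is why comparing the two at their respective minimum buy-ins is misleading.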

  2. CptCodFish

    hyper what...

    Seems like this article has a lot of fluff and no real details on how hyper-converged is really better than SAN or shared storage. I mean, comparing a $30K HCI solution to a $300K SAN doesn't make any sense when you can get a SAN for $30K that does exactly the same things and performs better than the HCI solution...

  3. Anonymous Coward

    Thank you for the explanation...

    I believed "hyperconverged" was literally like "hypersonic", which doesn't mean "sonic through the hypervisor" - because in Greek "hyper" means "over".

    My fault, of course, when some people invent new words without knowing what they're doing...

  4. Cloud, what..... Sorry... Um... - you just made that up.

    What a load of old tosh

    Hyper-converged is just software-defined storage wrapped up in a layer of BS by software vendors trying to steal some of the storage dollars from the traditional hardware vendors.

    It is not, in my experience, cheaper than a traditional SAN. Some of the offerings are easier to manage than some of the traditional SAN vendors'; some are not.

    It is not necessarily quicker: it may remove certain bottlenecks, but it adds others.

    But it is certainly full of hype.

  5. Zed Zee


    Hyper-convergence is a breakaway term, coined by Software Defined Storage (SDS) outfits who do not have enough muscle to develop fully fledged Hyper-Converged Infrastructure (HCI) platforms, or who have simply missed the boat. So they settle for SDS but confuse the market by calling it HC, so they can get on the whole 'hyper' bandwagon.

    HCI is the software virtualisation/abstraction of all three main system pillars: compute, network and storage. Unfortunately, until recently, most HCI vendors (you know who you are) have only been doing compute (hypervisor) and storage (SDS) offerings, while leaving the networking aspect to either a virtual switch (VMware vSwitch or Open vSwitch come to mind) or to good ol' hardware switches. It's only recently that they've started to use Software Defined Networking (SDN), to really push a fully fledged HCI solution.

    What HCI vendors are finding, though, is that most customers who have invested in VMware or Hyper-V don't really want to move off those platforms onto a relatively unknown hypervisor (Nutanix offer Acropolis, which is based on Linux KVM), so they also (like 'true' SDS vendors) drop down to the storage layer and flog their wares as merely SDS solutions. This has become so bad, in fact, that Nutanix, Nexenta and others have started to jettison their own hardware and just put their software stack on top of someone else's machines (look at Lenovo).

    Of course, the best solution to pursue in all this mess is a private cloud, underpinned by an open source SDS solution, which is not based on any proprietary HC/HCI/SDS platform. Otherwise, you're swapping one set of proprietary products for another.

  6. Peter Gathercole Silver badge

    I think I must have a different view of "simplicity"

    In my view, a server is a real server, a network switch is a real network switch, and a storage subsystem is a real storage subsystem. That's simple (even more so if the storage is local to the server as SDS systems appear to be moving back to).

    You get to think about them one at a time, and to scale, you just buy a bigger one of whatever has run out of steam!

    I appreciate that the hardware landscape is simple with hyper-converged systems, but the software installation is not! (And I speak as someone who has used LPARed systems with hypervisors and virtualised networks for over 10 years.)

    I've often thought one of the real reasons it's caught on is that it allows the PC vendors to sell ever larger, higher-margin systems (rather than cheaper, smaller individual systems) on the promise of reduced overall costs or energy consumption. I would love someone to publish a real-world study that actually measures these savings.

    You also get to suffer the problem of taking a large part of your infrastructure out of service because you've got to replace a memory DIMM, processor, or other significant part of the hyperconverged system that is running everything.

    Oh, wait. You need to invest in workload mobility products to overcome that problem!

