NetApp puts everything it's got into a hyperconverged box

NetApp has finally revealed its long-promised hyperconverged appliance. Named “NetApp HCI”, the product pours almost everything the company does into a 2U box, along with four unnamed servers, a cloud-style pay-as-you-go pricing plan and a vCenter plugin so you can manage it without having to learn new tools. The house of the …

  1. Anonymous Coward

    equipment is...

    SuperMicro.

  2. adamb1

    Looking to the future

    I think we will hear immediately from the established HCI players that this is just a "me too" play from NetApp. In my opinion, what sets this product apart from most "1.0" product launches is that the core of the product is version 10 of the industry-proven and widely deployed Element OS. This isn't a HyperFlex situation where 75% of the features are missing and the storage platform is unproven. The stability, scalability, and feature set of this solution match anything currently shipping from the more mature HCI products. This is a solid foundation for NetApp to build on (no pun intended), so I expect a lot of good things from this product as it moves past first launch.

  3. Anonymous Coward

    OEM is SuperMicro

    SuperMicro Twin Squared to be exact. I've had access to one for about a month now ;)

  4. Anonymous Coward

    As Unimaginative as "NetApp HCI"

    Nothing more to be said about this.

  5. Anonymous Coward

    Last attempt for architectural Salvation

    @ SolidFire. If this repackaging doesn't work out, Kurian will send the rest of them for a "much needed" vacation.

    1. Anonymous Coward

      Re: Last attempt for architectural Salvation

      Rumors are that SolidFire only has $60m or so in revenue per year, basically all of it in the service provider space. This seems to be the last attempt to salvage that division/technology.

      1. JohnMartin

        Re: Last attempt for architectural Salvation

        Rumour has it wrong ... very very wrong ... check through the Q4 earnings transcript for more detail.

  6. BradCraig

    And the swinging begins....

    This should be fun.

  7. Anonymous Coward

    Not HCI

    Separate compute and storage. Sounds very un-HCI.

    1. Afrojazz

      Re: Not HCI

      And what benefit do you get from having compute and storage share CPU cores?

      1. Anonymous Coward

        Re: HCI - the great distraction of our time.

        Depends on the HCI platform. When I was a vSAN customer, vSAN only used 2.5% of the CPU on my VDI cluster, so with that 97.5% free you can run quite a few workloads. A lot of storage arrays either have tons of extra CPU sitting around all day, or use limited core-count/low-power processors (I remember having a FAS in 2015 that used Celeron processors). If you have a 20-core processor, in most cases there are plenty of resources (compute and memory) to do other things.

        Now, given that SolidFire does global dedupe (and so has to keep large hash tables in RAM), does inline dedupe and compression of all IO, and is a bit obsessive about QoS (which likely means CPU affinity), there may be limits to how much compute can be used for other things. Time will tell...
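        Roughly what "keeps large hash tables in RAM" means: with global dedupe, every incoming block is fingerprinted and probed against one cluster-wide table before anything is written, so that table sits on the hot path of every write. A toy Python illustration of the general shape of content-addressed dedupe (not SolidFire's actual implementation):

        ```python
        import hashlib

        class DedupeStore:
            """Toy content-addressed block store with one global fingerprint table.
            Real arrays keep this table in RAM because every write must probe it."""
            def __init__(self):
                self.by_hash = {}            # fingerprint -> stored block
                self.writes = self.dupes = 0

            def write_block(self, data: bytes) -> str:
                self.writes += 1
                fp = hashlib.sha256(data).hexdigest()   # fingerprint the block
                if fp in self.by_hash:
                    self.dupes += 1          # duplicate: nothing new hits the SSDs
                else:
                    self.by_hash[fp] = data  # unique: store it and index it
                return fp

        store = DedupeStore()
        for block in (b"A" * 4096, b"B" * 4096, b"A" * 4096):
            store.write_block(block)
        print(store.writes, "writes,", store.dupes, "deduped,",
              len(store.by_hash), "unique blocks stored")
        ```

        At scale, that fingerprint table is the thing you can't afford to page out to disk, which is where the RAM cost (and some of the CPU budget) goes.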

        1. Arthur A.

          Re: HCI - the great distraction of our time.

          SolidFire guarantees 50K/100K IOPS per node (depending on the exact model), and it doesn't matter what kind of data you store. It doesn't matter whether the data can be compressed or deduplicated, or whether you use snapshots, clones, or replication. The same goes for QoS: minimum performance is always guaranteed.

          I think there is less performance per HCI node since it uses just six SSDs, but it is still guaranteed and independent of the compute nodes, which gives you really predictable performance for VMs.

          1. JohnMartin

            Re: HCI - the great distraction of our time.

            Half a dozen modern SSDs shouldn't have too many problems servicing 100,000 4K IOPS, so while NetApp hasn't released detailed performance data yet, I doubt the SSDs will be a bottleneck.
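            Back-of-the-envelope, using the numbers quoted upthread (the 100K figure is the per-node guarantee mentioned above, not a measured result):

            ```python
            # Per-SSD load if a 6-SSD node has to service its guaranteed IOPS.
            guaranteed_iops = 100_000    # per-node figure quoted for the larger models
            ssds_per_node = 6
            print(guaranteed_iops / ssds_per_node)   # ~16,667 4K IOPS per SSD
            ```

            Current datacenter SSDs are rated well above that for 4K random reads, so the drives should have plenty of headroom.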

      2. BradCraig

        Re: Not HCI

        Ability to independently scale resources, finer control over QoS/performance.....

        1. Anonymous Coward

          Re: Not HCI

          HCI can do all of that. But this is not HCI. This is business as usual with some packaging and marketecture. Flexi-SolidPod anyone?

          1. Anonymous Coward

            Re: Not HCI

            When Netapp showed us their hotly anticipated HCI architecture, I immediately commented: this is not HCI! This is exactly the same as CI, in the likes of FlexPod, disguised in the smaller 2U 4-node form factor that is so popular with HCI vendors. You call this innovation! This is just repackaging! It should be called FlexPod Micro! It is a shame that Netapp calls this HCI. They really have no idea why HCI came about in the first place. Netapp, here is HCI 101 for your benefit: people buy HCI because they do not want to buy expensive traditional external storage arrays, but instead want to use smart software-defined storage (SDS) made of commodity disks or SSDs built into every commodity server. When I first heard that Netapp was working on an HCI, I immediately thought to myself, "Hmmm... they must be modifying the Solidfire code to be an SDS". That is what you should have done, Netapp! Then it would have been a very powerful HCI with a proven Solidfire pedigree! But instead you took the easy way out and used smoke and mirrors to try to dupe us into thinking that you have a legitimate HCI when it is really a repackaged FlexPod! Boo.......Netapp! We are not as dumb as you think!

      3. Anonymous Coward

        Re: Not HCI

        Well, let's say for a minute that there are NO benefits. The point is, separate compute and storage makes it a traditional 3-tier server and SAN/NAS architecture. Definitely NOT HCI. So they fail even at launching a "me too" product. This is an "I have no idea what I'm doing" product.

    2. Anon9876

      Re: Not HCI

      If Dell EMC and VMware can call their stuff "cloud", NetApp can call this HCI.

  8. snoggs
    Headmaster

    the most far-reaching innovation announcement?

    That's fine, but does the product itself feature any innovation?

    1. We Haven't Met But You're A Great Fan Of Mine

      Re: the most far-reaching innovation announcement?

      Should read: the most far-fetched innovation announcement...

      Sounds like NetApp are hiring Trump's former advisors.

  9. scrubber
    Joke

    For fun

    Anyone remember IBM Pure?

    1. returnofthemus

      Anyone remember IBM Pure?

      Yes, quite fondly.

      And to be precise, they were called IBM PureSystems.

      However, these went to Lenovo with the rest of their x86 server assets; you only have to look at how the x86 vendors are now fighting for scraps to understand why IBM retreated from this market.

      Whilst VBollox from VCE had a headstart, it came nowhere near close in terms of manageability, but as we all know the market for so-called 'Converged Infrastructure' was short-lived; the market for hyped-up converged will be even shorter.

      Considering even NetApp will be looking to scale compute and storage independently, it just shows how short-lived this market will be; we will have come full circle in a short space of about 10 years.

      All the while IBM continues to deliver the industry's premium platforms for both scale-up and scale-out workloads.

  10. Anonymous Coward

    Just curious...

    Sorry, I'm not the sharpest tool in the shed, but if storage, compute and VMware are configured/provisioned separately, then WTF about this is actually hyper-converged?? Is there actually an HCI/SDS SW stack, or are they really just putting compute and storage into a single HW chassis, giving you a vCenter plugin and calling it hyper-converged? Are you f'ing kidding me?!

    1. RollTide14

      Re: Just curious...

      Let me start this post with the always applicable "THERE WILL ALWAYS BE TRADEOFFS" line. By having these components provisioned separately, it allows for certain efficiencies that current HCI players aren't able to offer.

      My customers who have gone down the HCI route have done so for one reason and one reason only: simplicity. You can try and tell me otherwise, but it always boils down to that single reason. Should a customer really care how it's packaged if it provides the same levels of simplicity (I'm making an assumption that it does, i.e. single pane of glass, one-click upgrades...)?

      1. BradCraig

        Re: Just curious...

        I second this, RollTide; customers just don't care about whether it's truly HCI. It's SIMPLICITY!!!! For those that haven't dug into the details, see my blog post from this AM for a bit more detail. I haven't gotten my hands on it yet, but I like what I see so far.

        http://withrove.com/blogs/netapp-hci

    2. Anonymous Coward

      Re: Just curious...

      It's just an HCI wannabe Flexpod

      1. Anonymous Coward

        Re: Just curious...

        Hyper Converged at NetApp means that the developers now sit in the same building. And while they do not yet talk to each other, it is certainly the biggest breakthrough in 25 years.

        They also installed a new PA system in the office buildings to deliver “the most far-reaching innovation announcement in its 25-year history.”

        Perhaps now the Support teams may hear about those new products before the first calls come in.

        "Sales teams are being trained to sell servers and appliances ..." because NetApp is a software company.

        "...and NetApp's channel are being revved up." ... by the competition...

        1. Anonymous Coward

          Re: Just curious...

          Hyper Converged = employees fly economy class

    3. Anonymous Coward

      Re: Just curious...

      This one certainly doesn't follow the usual HCI vendors. The benefits I see here: ease of deployment and management just like any other HCI, and no performance trade-off at the virtualization layer.

      Customers don't need to spend on ESX licenses when they need to add to the storage tier. Customers have the choice to go for either a storage or a server node depending on their requirements, plus the QoS which guarantees minimum performance. So yeah, it may look different from the usual players, but I feel it's a game changer, and it has all the advantages of coming to market a little late.

    4. JohnMartin

      Re: Just curious...

      ElementOS is the software-defined storage stack. This is then packaged in such a way that the CPU and memory impact of inline storage efficiency and other data services doesn't interfere with compute processing, and it reduces VMware / Oracle / SQL Server licensing costs. It supports the fine-grained incremental scale-out of HCI, the simple setup of HCI, the ability for the entire configuration to be managed by the virtualisation admin of HCI, the low TCO of HCI, the fast ROI of HCI, the "4 nodes in a 2U box with integrated storage" of HCI, and the API-driven automation of HCI.

      If you want to disqualify it as HCI because the storage software doesn't share CPU cores with the hypervisor (which at scale is more of a benefit than a drawback, thanks to the way hypervisors and many other software products are licensed), then feel free, or call it composable infrastructure if you like. The vast majority of people who buy HCI don't buy it because of CPU core sharing; they buy it because of the incremental purchasing, easy installation, low administration costs, and better TCO and ROI than roll-your-own infrastructure. For them, if it walks like a duck, looks like a duck and quacks like a duck, then it's a duck. Likewise with HCI, and they won't care about core sharing any more than they care about whether the term duck should be used only for the Genus Anas within the Family Anatinae, or whether the New Zealand Blue duck in the Genus Hymenolaimus counts as a "real" duck, provided they both taste good with a nice orange glaze.
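      To give a flavour of the API-driven automation piece: ElementOS exposes a JSON-RPC interface, so provisioning a volume complete with a QoS floor/ceiling/burst is a single call. A rough Python sketch (the cluster address, credentials, account ID and API version path are placeholders; check the Element API docs for your release):

      ```python
      import requests

      ENDPOINT = "https://cluster-mvip/json-rpc/9.0"  # placeholder management VIP + version
      AUTH = ("admin", "secret")                      # placeholder cluster admin credentials

      def element_call(method, params):
          """Send one Element OS JSON-RPC request and return its 'result' payload."""
          resp = requests.post(ENDPOINT, auth=AUTH, verify=False,
                               json={"method": method, "params": params, "id": 1})
          resp.raise_for_status()
          return resp.json()["result"]

      # Create a 1 TB volume with a guaranteed floor, a cap, and a burst allowance.
      result = element_call("CreateVolume", {
          "name": "vmware-datastore-01",
          "accountID": 1,              # placeholder tenant account
          "totalSize": 1 * 1000**4,    # bytes
          "enable512e": True,          # 512-byte sector emulation for ESXi
          "qos": {"minIOPS": 1000, "maxIOPS": 15000, "burstIOPS": 20000},
      })
      print("created volumeID", result["volumeID"])
      ```

      The point isn't this particular snippet; it's that everything the UI does is driven through the same API, which is what makes HCI-style automation possible.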

  11. nilfs2
    Meh

    Just another brand selling overpriced SuperMicro kit

    Same SuperMicro kit that Nutanix uses and sells with a price tag so high that it is laughable; I expect no less from NetApp.

  12. Anonymous Coward

    Same old 3 tier with a different name

    I expected a lot more from NetApp given how well established they are and how many smart people they have on staff. Simply repackaging an existing 3-tier solution into a different shell isn't impressive at all. Their software integration had better be the best in the world or this will fail miserably. But how can you really integrate software that well when you have to rely on VMware? You would be better off getting VSAN, which is actually fully integrated when you buy it in VxRail form. Otherwise the established HCI players are the way to go, and now seems like a good time to start asking them about the economics, as they are also offering pay-as-you-go models. Looks like too little, too late from NetApp.

  13. Jbry

    Maybe I'm missing something, but is there anything ground-breaking here? ...trying to figure out exactly which feature they describe isn't a neutered version of what is already available in a Nutanix or VxRail HCI.

    1. JohnMartin

      How is NetApp HCI Superior to Nutanix / VxRail ?

      Disclosure NetApp Employee

      There's a lot of stuff in NetApp HCI that goes well beyond what you'd find in Nutanix or VxRail, but one of the most obvious differences is that, unlike either of those solutions, NetApp HCI is designed from the ground up to run at datacenter and service provider scale, rather than being implemented as an edge / point deployment solution (land) with hopes that it can grow without too much pain (expand).

      The majority of first-generation HCI solutions end up as point solutions (e.g. VDI) that rarely go higher than 8 nodes. For many, the uncertainty around safely mixing workloads on a single cluster, due to latency and throughput variability in the scale-out storage underpinnings, means that each workload gets its own cluster. This either leads to inefficiencies in management and utilisation, or long troubleshooting exercises when applications are affected by "noisy neighbours". NetApp HCI is built on SolidFire technology, which was designed for mixed workloads at datacenter / service provider scale. The ability to safely consolidate hundreds of disparate workload types on a single, extremely scalable cluster significantly reduces administration and operational costs. This has been a design / architecture strength of Solidfire from the beginning, and NetApp HCI inherits that. All of this means you're getting enterprise levels of reliability, serviceability, performance and scale in the 1.0 release .. and this is just the beginning.

      There's a lot more to it than just enterprise-class reliability, serviceability, performance and scale, though; there is also the world's best file services infrastructure and integration into a multi-cloud data management platform. So if you're interested in an independent architectural framework that helps you do a fair comparison between NetApp HCI and the rest of the solutions on the market, check out the following link.

      http://www.netapp.com/us/forms/campaign/compare-netapp-hci-vs-hyper-converged-competitors.aspx

      Or get in contact with NetApp and ask to get a briefing from one of the Next Generation Datacenter team.

      Regards

      John Martin

      1. Anonymous Coward

        Re: How is NetApp HCI Superior to Nutanix / VxRail ?

        Um... OK John, thanks for the nice infomercial complete with all the catchy buzzwords, but do you have the ability to cut the marketing BS and actually describe some technical features/details that make this anything new/unique/differentiated from either existing HCI solutions or old-school 3-tier? Cuz right now it looks like 25-year-old 3-tier technology with new packaging and some lipstick. Nothing you said above contained any real substance to prove otherwise. It's OK if you want to consult your SE and then get back...

        1. JohnMartin

          Re: How is NetApp HCI Superior to Nutanix / VxRail ?

          If you want a technical description then I'll have to do one of my LONG posts .. and the comment section isn't long enough, so this is the short version .. the three main things that differentiate HCI solutions are the software-defined storage layer, the hypervisor, and the management interface.

          I'm going to leave the hypervisor question aside for the moment because the vast majority of the market is VMware .. that's not to say that Hyper-V, or various flavours of KVM, or a hypervisorless containerised approach aren't good in their own way, but even if you don't like VMware, most people I speak to agree that ESX (especially when combined with vSphere and the rest of the VMware ecosystem) is the best hypervisor .. though clearly that's not differentiating. Some would argue that Nutanix's management interface is one of its best features; others would say that if you're already committed to VMware, then learning an additional interface just makes the learning curve steeper and that you're better off not having to context-switch outside of the vSphere interface .. vSAN approaches do this, as does NetApp HCI, so again, not overly differentiating, and personally I think there's room for both approaches.

          That leaves us with the SDS layer .. which is both the fundamental enabler of HCI and also its Achilles heel. Most Gen-1 HCI failures and meltdowns are caused by limitations in the software-defined storage layer .. that's not to say that at the right scale with the right workload vSAN and Nutanix don't perform adequately, but every HCI benchmark I've ever seen had really lacklustre performance results, and that limits the use cases and the scale of HCI deployments. There isn't the space to fairly describe the limitations of the SDS layers in VxRAIL and Nutanix, and if you're a big fan of either you won't appreciate my calling your baby ugly; they both have strengths in particular use cases, and as someone said earlier, there are always tradeoffs.

          The SDS layer in NetApp HCI comes from the latest version of ElementOS .. to get a detailed understanding of its architecture, check this old Tech Field Day presentation from 2014: https://www.youtube.com/watch?v=AeaGCeJfNBg .. also worth noting that SRM support, which is an often-cited concern with HCI, was delivered for Solidfire at about the same time, and it doesn't require the absolute latest version of vSphere to work. Since then there have been a number of enhancements, including brilliantly implemented support for vVols (though you can still use old-fashioned datastores if you want, something I don't think VxRAIL will let you do) .. for more information on that, check out this series of 5 videos starting here: https://youtu.be/4CH3thsRxR8. The other relevant and large difference is support for SnapMirror, which allows integrated, low-impact backup to cloud and integration with the rest of the NetApp Data Fabric portfolio. Going into detail about the superiority of the replication technologies vs either Nutanix or VxRAIL (with or without RecoverPoint) won't fit in this comment thread, but if you're really interested I'll pull up some videos and post them.

          The superiority doesn't just come from checkboxes on feature sheets; the devil, as you should know, is in the details. There's a big difference between a product feature and a product feature you actually use, and that's why it's usually better to ask an expert.

      2. Anonymous Coward

        Re: How is NetApp HCI Superior to Nutanix / VxRail ?

        Nice command of the <ctrl>c and <ctrl>v from your marketing material.

        It sounds as if you are trying to compete with HCI products from 2013.

        1. JohnMartin

          Re: How is NetApp HCI Superior to Nutanix / VxRail ?

          I use OSX mostly, so if it was a cut and paste it would have been command-C .. but as it turns out I write all my own material.

          Speaking of which .. I once shot a competitor who was hiding behind an anonymous coward mask in my pyjamas ... how the mask got into my pyjamas I'll never know.

          I'm here all week .. try the fish.

  14. Nate Amsden

    too big, or too small

    My VMware boxes are 36-core (newer systems are going to be 44-core) DL380 Gen9s, but 384GB of memory is our standard (generally seems more than adequate for our workloads at the moment), with 4x10GbE (2x10G for VMs and 2x10G for vMotion etc).

    So small is... too small. Medium has not enough CPU cores but too much memory, and large has good CPU cores but way too much memory, and way too much storage.

    Sticking to unconverged for now anyway.

    1. JohnMartin

      Re: too big, or too small

      You can mix and match the storage nodes and compute nodes, so you could have a big compute node combined with one or more small storage nodes. That's part of the rationale behind the architecture, because it makes it easier to get the scaling ratios right.

      Separate scaling of CPU and memory in a pooled configuration (similar to storage) would be interesting though, wouldn't it :-)

      1. Anonymous Coward

        Re: too big, or too small

        Funny thing is, you can do that with vSAN.

        HPE (Simplivity) can have compute nodes.

        Nutanix can have storage nodes.

        The only thing this claims to do better is 'QoS'. Yawn. See vSAN 6.2.

        1. JohnMartin

          Re: too big, or too small

          That the other HCI vendors are beginning to sell storage-only nodes (I haven't seen them sell compute-only nodes, though) validates the architectural design NetApp has taken. The consumption models around them, and the rest of the menagerie of mix'n'match node types, seem to be a lot more complex than what's being launched with NetApp HCI. It's also worth noting that most (all) of these approaches require you to purchase additional VMware licenses for the storage nodes, and they tend to push up the licensing costs of Oracle and SQL Server, which tend to want to charge for the total number of cores in the whole vSphere cluster just because you might run Oracle on them one day (it's dumb, but it happens).
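          To put rough numbers on that licensing point (all figures below are invented purely for illustration; real Oracle and VMware pricing varies):

          ```python
          # Why cluster-wide per-core licensing punishes shared storage/compute nodes.
          nodes, cores_per_node = 8, 20
          price_per_core = 10_000        # placeholder per-core license cost

          cluster_cores = nodes * cores_per_node
          print("bill if every core in the vSphere cluster must be licensed:",
                cluster_cores * price_per_core)

          # Dedicated storage nodes sit outside the vSphere cluster, so their
          # cores never appear on the database license bill at all.
          storage_nodes = 4
          print("cores kept off the bill:", storage_nodes * cores_per_node)
          ```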

          QoS that actually works and is easy to use / change, with floor, max and burst, is different from QoS that just does rate limiting and causes unpredictable latency spikes; plus, a lot of people are still unwilling or unable to move to the latest version of vSphere.
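          To make the floor/max/burst distinction concrete: a pure rate limiter can only cap a noisy neighbour, whereas a floor means every volume's minimum is carved out first and only the leftover capacity is shared. A deliberately simplified Python sketch (not ElementOS's actual scheduler; burst crediting is omitted):

          ```python
          def allocate_iops(demands, qos, capacity):
              """Toy QoS allocator: every volume is guaranteed its floor (min);
              leftover capacity is shared out without exceeding each cap (max)."""
              # Pass 1: each volume gets min(demand, floor) off the top.
              alloc = {v: min(d, qos[v]["min"]) for v, d in demands.items()}
              spare = capacity - sum(alloc.values())
              # Pass 2: hand out the spare capacity, up to each volume's cap.
              for v, d in sorted(demands.items()):
                  extra = min(d, qos[v]["max"]) - alloc[v]
                  give = min(extra, max(spare, 0))
                  alloc[v] += give
                  spare -= give
              return alloc

          demands = {"oltp": 8000, "vdi": 30000}     # offered load per volume
          qos = {"oltp": {"min": 5000, "max": 10000},
                 "vdi":  {"min": 2000, "max": 25000}}
          print(allocate_iops(demands, qos, capacity=20000))
          # -> {'oltp': 8000, 'vdi': 12000}: oltp is unaffected by the noisy vdi volume
          ```

          With rate limiting alone, the vdi volume's 30,000-IOPS demand would simply be clipped at its cap while oltp fought for whatever was left; with a floor, oltp's minimum is funded first no matter what the neighbour does.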

          Lastly, there's a bunch of other strengths ElementOS brings to the table in terms of performance, scalability, replication, D/R, failure resiliency, multi-tenancy, and the ability to both grow and shrink the capacity and performance of the storage pool non-disruptively.

          Even so, there are going to be times when buying servers that have exactly the right ratio of compute to memory will make more sense than buying one of the three HCI compute nodes, but that's why there are also more traditional converged infrastructure offerings within the Data Fabric .. both approaches have their strengths, you just have to understand the tradeoffs in each architecture.

          1. Anonymous Coward

            Re: too big, or too small

            So John, if I take out all the fancy buzzwords and boil this down, what I'm hearing is...

            1) all the existing HCI solutions have a bunch of "limitations" that are centered around flexibility, scale, and performance

            2) hey, we've got this new unique thing that lets you scale storage and compute independently with QoS!

            3) it's basically a SolidFire storage array + some compute + network...but it's in a new package!

            So we are full circle back to 3-tier architecture and all of the limitations/cost/complexity that comes with it.

            Maybe you can help us understand architecturally how this is different than a FlexPod? What benefits would NetApp HCI provide over that solution?

            And given the very rapid adoption by customers of HCI solutions like Nutanix, Simplivity, VSAN, VxRail...you're telling us there are no benefits/value prop there, that only NetApp HCI can provide??

            Come on man...you're talking yourself in circles

            1. JohnMartin

              Re: too big, or too small

              Which fancy buzzwords are you referring to, exactly? Reliability, Serviceability and Scalability .. odd, because I thought they were infrastructure design goals .. but let me answer you point by point.

              1. all the existing HCI solutions have a bunch of "limitations" that are centered around flexibility, scale, and performance

              That's a fair characterisation of the first generation of HCI products, though to be fair, every architecture has limitations in all these areas. In the case of Gen-1 HCI, those limitations are enough to keep most implementations under 8 nodes in a cluster for a single workload before they crank up another cluster to handle a different workload. It's rare to see VDI and database workloads on the same cluster.

              2) hey, we've got this new unique thing that lets you scale storage and compute independently with QoS!

              From an HCI perspective, a high-quality QoS implementation based on all-flash (which is pretty much required to implement guaranteed minimum IOPS) along with inline storage efficiencies is a new and unique thing. From a Solidfire perspective this isn't new, but it is still unique in a shared-nothing, software-defined storage product that is proven to work at scale.

              High-quality QoS at the storage layer enables scalable, predictable, multi-tenanted infrastructure. There is a direct correlation between the quality of your QoS implementation and your ability to scale within a single cluster. QoS, however, has little to do with independent scaling of compute and storage; that feature comes from the way ElementOS has been packaged within NetApp HCI.

              3) it's basically a SolidFire storage array + some compute + network...but it's in a new package!

              If you'd also categorise VxRAIL as just a vSAN array + some Dell 2U servers + network, or Nutanix as just a DSF array + various compute + network, then I suppose that would be a reasonable comparison, but none of those descriptions do justice to the rest of the work all three vendors have done around integration, user experience, workflow simplification and lifecycle management that goes into the packaging of those technologies. Arguably it's the packaging that you appear to be deriding that delivers most of the cost savings and simplification benefits that people value in HCI.

              "So we are full circle back to 3-tier architecture and all of the limitations/cost/complexity that comes with it."

              OK, so when it comes to "3-tier architecture" .. I'll channel Inigo Montoya and say "I don't think that term means what you think it does" (see https://youtu.be/G2y8Sx4B2Sk). Most people would argue that it's a software architecture design pattern that's proven itself over and over again, unless you'd argue that because the presentation, logic, and data layers are separate and can be scaled independently, it is just like most relatively modern datacenter infrastructure design patterns, which is a bad thing, and that we should all run monolithic software on mainframes because that's simpler.

              OK, leaving technical pedantry around terminology aside, given there are people who argue that the lack of core-sharing means NetApp shouldn't use the term HCI (see my post about looks like an HCI, walks like an HCI, quacks like an HCI), let's take the whole comment.

              "So we are full circle back to 3-tier architecture and all of the limitations/cost/complexity that comes with it"

              No, the compute and storage are designed to be separately scalable, and because of the way it's packaged the limitations/costs/complexity are removed. That's the whole point of the work done on integration, delivery, packaging etc.

              "Maybe you can help us understand architecturally how this is different than a FlexPod? What benefits would NetApp HCI provide over that solution?"

              FlexPod (and vBlock for that matter) were built to be large standardised infrastructure scaling units, using scale-up storage mentalities, designed for traditional IT departments. There are trade-offs compared to NetApp HCI. With FlexPod you get the flexibility to choose pretty much any server config you like, and match that with an independently managed storage array sized, configured and generally managed by a storage expert who enjoys talking about RAID levels, LUN queue depths, NFS multipathing etc., which aligned with the way many datacenter teams are built. Nothing wrong with that; it still works really well for a lot of IT organisations, and there are lots of very large converged infrastructure deployments, because it worked a lot better than the usually messy bespoke configurations that people had been doing for their tier-1 apps and virtualised workloads.

              NetApp HCI scales in much smaller increments and is designed to be installed, operated, and run entirely by the VMware admin with little or no storage expertise at all. It helps a lot that Solidfire was never like a traditional array in the first place. It wasn't designed for traditional IT storage / infrastructure people; it was designed for cloud architects building scalable next-generation datacenters.

              "And given the very rapid adoption by customers of HCI solutions like Nutanix, Simplivity, VSAN, VxRail...you're telling us there are no benefits/value prop there, that only NetApp HCI can provide??"

              No, I never said that at all; the first generation of HCI solutions proved the value of the approach, and if there was no value proposition there NetApp never would have invested in this space. What I am saying is that customers who like HCI but have hit the limitations of their SDS layers, and who would like something with better, more predictable and more scalable performance, should be talking to us.

              "Come on man...you're talking yourself in circles"

              Not really; the message remains the same .. NetApp HCI is better than first-generation HCI for customers who want to save costs by consolidating more workloads into a single HCI cluster, with guaranteed performance and better scalability, for their next-generation datacenter.

          2. Anonymous Coward

            Re: too big, or too small

            Uhm, maybe you should update your facts. During the Nutanix .NEXT events they've talked about hundred-node clusters at PB scale, and of customers running monster (20+ TB) Oracle DBs and SAP, not to mention they're (I think) still the only HCI vendor certified by SAP.

            Their management interface is not a management interface, it's more an operations center with pretty nifty intelligence and analysis stuff crammed in there.

            So where, exactly, are these "limitations" or "point solutions" like VDI? This doesn't compute.

            And from your announcement, dunno, but it feels like you just re-invented the vBlock.

            1. JohnMartin

              Re: too big, or too small

              There was a lot of research done in preparation for this, and a stack of NDA presentations to prospective customers .. the feedback was pretty consistent: the majority of Nutanix and vSAN customers hit problems at scale, mostly due to the SDS layer. One of those presentations was to probably one of the two or three biggest Nutanix customers worldwide .. the big storage workloads all ended up on a traditional SAN. There was also a recent Register piece from Chad Sakac of EMC who pretty much said the same thing for VxRAIL.

              I take your point about the Nutanix management interface; it's a lot more than that, and it's a great piece of technology, arguably the best thing about Nutanix. There are people, however, who prefer to stay inside vSphere and use the VMware toolkits for most things .. the NetApp HCI UX works really well for people who have that preference.

              Saying we've reinvented vBlock isn't a bad thing either; I respect all my competitors, and there's a lot of excellent engineering there for the right use case and IT organisation, but really the most direct comparison to vBlock is FlexPod, which is growing rapidly (+20%) while vBlock is shrinking even faster (-30%). NetApp HCI is built to be installed and operated purely by the VM / cloud admin without any storage expertise, and it also scales in much smaller increments.

              1. Anonymous Coward

                Re: too big, or too small

                Hmmmm... lots of focus on "gen 1" HCI. So now that we are onto gen 2 (and some might even say gen 3) iterations of HCI, does NetApp HCI still differentiate or provide the benefits you're talking about? So basically you're stating that you occupy the space that sits between smaller HCI deployments (8 nodes or so) and a large enterprise 3-tier solution like a FlexPod/Vblock, providing enterprise scale and the simplicity of HCI.

                But wait... doesn't VxRack (and Nutanix for that matter) already provide the same thing (service provider scale, thousands of nodes, millions of IOPS, PBs of capacity, blah, blah)? That was what Chad's article was all about anyway, right? And VxRack SDDC supposedly does it natively with VMware. So again, just trying to understand what's new/different here. Thanks

                1. JohnMartin

                  Re: too big, or too small

                  I'll try to be clearer so you can appreciate the difference

                  "So basically you're stating that you occupy the space that sits between smaller HCI deployments (8-nodes or larger) and a large enterprise 3-tier solution like a FlexPod/Vblock,

                  No, if that's how I came across, then allow me to correct myself .. NetApp HCI scales well into the space currently occupied by FlexPod, and the small FlexPods (FlexPod Mini) also scale down into the space occupied by the sweet spot of the Gen-1 HCI products. The important thing is that NetApp HCI is not limited to the departmental scale that is the typical implementation of an individual Nutanix or VxRAIL cluster.

                  "But wait... doesn't VxRack (and Nutanix for that matter) already provide the same thing (service provider scale, thousands of nodes, millions of IOPS, PBs of capacity, blah, blah)?"

                  Not really, no .. it might be possible to configure a single VxRAIL-based cluster which has some impressive theoretical specs, and then run a homogeneous workload that balances nicely across all the nodes in the cluster, but running a real mixed workload typical of a datacenter would probably result in an SDS-induced meltdown at some point, especially after a node failure results in a storm of rebalancing behaviour. I'm painfully aware that this comes across as unsubstantiated FUD, but I'm not at liberty to directly disclose the results of the interviews and market research that were done when the product was being designed, so I can't substantiate it here. By analogy, though, there is a reason why VxRack (EMC's large-scale HCI offering) doesn't use vSAN as its underlying SDS layer.

                  As I've said earlier, the key differentiation with NetApp HCI is the SDS layer. If you're interested in the differentiation of NetApp HCI, begin with an overview of SolidFire, keeping in mind that it was built to leverage the best of flash technology (it didn't start out as a hybrid array), and think about how, in the storage world, most hybrid arrays are declining in sales while all-flash is increasing dramatically. NetApp HCI's storage technology is good enough to compete with a specialist storage array toe-to-toe and win, even without the other benefits of HCI .. I don't think you could argue the same thing for vSAN or DSF.


        2. Anonymous Coward

          Re: too big, or too small

          You quoted 3 different products. Ain't it good to have all those features in one?!

  15. Dave 13

    Meh..

    Meh. Too little, too inefficient a supply chain and too much competition. Just another SuperMicro reseller.

  16. Broooooose

    Protecting their install base

    A little late to the game, but for organisations that haven't moved into HCI yet (and there are loads of them) it's another option. I don't see anything particularly interesting; others have been providing some of these features for years. But I don't think this is about net new logos, it's about protecting the ONTAP install base and leveraging goodwill among end users and the channel.

    1. JohnMartin

      Re: Protecting their install base

      Odd, I heard almost exactly the same thing about All Flash FAS .. which, two years after release, generates the majority of NetApp's 20%+ AFA market share ($1.7 billion run rate) and is growing twice as fast as Pure or EMC. If you look at NetApp's SAN revenue growth (+12.6%) vs DellEMC (-16.6%) or IBM (-12.2%), that shows that a lot of this is going into net new logos .. some of them very large net new logos.

      NetApp no longer equals ONTAP .. the industry is in one of those rare times when everything changes, and we've been planning for this opportunity for a while now. NetApp HCI is going to be a big part of that.

  17. Liger

    "HCI" - disingenuous positioning at best

    "Is a dream a lie if it doesn't come true, or is it something worse?" -- Bruce Springsteen

  18. Anonymous Coward

    EVO:Rail 2.0

    It'll do about as well as EVO:Rail 1.0 I suppose

  19. Anonymous Coward

    Will be interesting to see how well their server business performs. Good luck to them.

    I hope it does better than the EVO:RAIL mess they tried years back, which was probably the most horrid product launch they ever had... sure, let's take HCI and slap a storage array onto it... defeating the whole purpose of HCI in the first place....

    What they are trying to do is their fabric strategy: a unified SW / management layer across all their HW and into their cloud integration and backups. It's a compelling story. I just think they had a couple of years of market experience they could have put into this product previously to have a mature offering now. Much like flash, a lot of catching up to do. Perhaps if they push this like their flash kits, they may also take a lot of market share. Have to hand it to them though, in recent years the product releases seem "right". They clearly "drained the swamp" of all the BS they were doing from about 2012-2015. Glad to see it's changed.

    But late to the game is better than Pure Storage, who still seem to think that on-premises, flash-only, primary SAN storage is the only thing everybody wants.
