VMware license changes mean bare metal can make a comeback through 'devirtualization', says Gartner

Analyst firm Gartner has published its 2024 Hype Cycle for Data Center Infrastructure Technologies, and added virtual-to-physical migrations – aka "devirtualization" – to its list of ideas that are set to take off, thanks to Broadcom's licensing changes. "As on-premises virtualization projects move from [enterprise license …

  1. Nate Amsden

    started doing this in early 2014

    (vmware customer since 1999) .. While investigating ways of improving performance and cost for the org's stateless web applications I decided on LXC on bare metal for them. My more recent LXC servers go a bit further in that they have fibrechannel cards and boot from SAN; my original systems just used internal storage. I still only use it for stateless systems, that is, systems that can fail and I don't really care. I have read that newer versions of LXC and/or LXD allow for fancier things like live migration in some cases, but I have never looked into that.

    Management of the systems is almost identical to VMs: everything is configured via Chef, and they all run the same kind of services as the VMs. You wouldn't know you were in a container unless you were really poking around. Provisioning is fairly similar as well (as is applying OS updates), mostly custom scripts written by me which have been evolving bit by bit since around 2007.

    Fortunately drivers haven't been an issue at all on the systems I have. I recall it being a real PITA back in the early-to-mid 2000s with drivers on bare metal Linux, especially e1000* and in some cases SATA drivers too (mainly on HP ProLiants). I spent tons of hours finding/compiling drivers and inserting them into kickstart initrds which were then PXE booted. Only time in my life I used "cpio".
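    For anyone curious what that kind of bare-metal LXC setup looks like on disk, here's a minimal sketch of a classic container definition. The container name, bridge name and paths are made up for illustration, and modern LXC 3.x+ key names are assumed:

    ```conf
    # /var/lib/lxc/web01/config -- hypothetical stateless web container
    lxc.uts.name = web01
    lxc.rootfs.path = dir:/var/lib/lxc/web01/rootfs
    lxc.net.0.type = veth
    lxc.net.0.link = br0          # host bridge the container's veth pair attaches to
    lxc.net.0.flags = up
    lxc.start.auto = 1            # bring the container back up after a host reboot
    ```

    A boot-from-SAN variant would look the same; only the rootfs path would point at the SAN-backed filesystem instead of local disk.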

    I adopted LXC for my main server at home back in ~2017 as well, which runs 7 containers for various things, but I still have VMware at my personal co-lo with 3 small hosts there with a couple dozen VMs on local storage. Provisioning for home stuff and management there is entirely manual, no fancy configuration management.

    I do plan to migrate some legacy MSSQL Enterprise servers to physical hardware soon as well, as the org is opting not to renew Software Assurance, so licensing costs for a VM environment will go way up (SA grants the ability to license just the CPUs for the VMs running SQL, regardless of the number of CPUs in the VM environment, but you lose that when you stop paying for SA). Simpler just to consolidate onto a pair of physical servers in a shared-nothing cluster. I've never tried boot from SAN with Windows before but from what I read it should work fine (yes, I like boot from SAN; in this case each server will be connected to a different storage array).

    I've never personally been interested in docker-style stuff so have never gone that route (I do have to interact with docker on occasion for a few things and it's always annoying). A previous org played with kubernetes for a couple of years at least and it was a practical nightmare as far as I was concerned. I'm sure it has its use cases, but for 95% of orgs it's way overkill and over-complex.

    1. Adam Inistrator

      Re: started doing this in early 2014

      LXC is superb. All the benefits of virtualisation without any of the downsides of monolithic virtual machines or the complexity of docker. Been using it in production for nearly a decade now.

      1. Plest Silver badge

        Re: started doing this in early 2014

        "complexity of docker"?

        If you think Docker is complicated you're in the wrong game. Other than WSL, Docker is about the simplest virtualisation platform out there.

        1. cozappz

          Re: started doing this in early 2014

          There's also podman, if you don't have root.

          Damn, it's so freaking easy to spin up a pod with a stack of servers, and provisioning is kid's play.

          But hey, I have more migrations of technologies under my belt than a guy that started with FORTRAN on Ultrix.

      2. Mike007 Silver badge

        Re: started doing this in early 2014

        Creating docker containers for services is by far the easiest way of managing server/service configuration...there are things like kubernetes for more complex automation needs, but for your basic "I want a DNS server"/"I want to deploy this web application" type scenarios I find plain docker/compose to be simple with as far as I can tell no downsides compared to manually deployed servers.

        You can either take a preconfigured application and just supply a config file, or script the creation of a custom environment with the ability to easily test the entire build process. Imagine version control of your server with the ability to roll back to the exact state it was in before the command to install version 1.2.3, instead execute the command to install version 1.3.0, then replay all of the other commands you did after the install to configure it... Oh, and you can test the whole process on your laptop with everything being identical to the production environment.
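        That rollback-and-replay idea can be sketched as a Dockerfile, where the version is just a build argument. Everything here is a made-up placeholder (the image works, but the download URL, config file and binary path are hypothetical):

        ```dockerfile
        # Hypothetical sketch of a "server as replayable commands" build.
        FROM debian:12-slim

        # Rolling back to the pre-1.3.0 state is a one-line change: build with
        # APP_VERSION=1.2.3 and every later step replays exactly as before.
        ARG APP_VERSION=1.3.0

        RUN apt-get update \
            && apt-get install -y --no-install-recommends ca-certificates curl \
            && rm -rf /var/lib/apt/lists/*

        # Fetch and unpack the release (example.com stands in for the real source).
        RUN curl -fsSL "https://example.com/app-${APP_VERSION}.tar.gz" | tar -xz -C /opt

        # Post-install configuration lives in version control too (app.conf is
        # assumed to sit next to the Dockerfile in the build context).
        COPY app.conf /etc/app/app.conf

        CMD ["/opt/app/bin/server"]
        ```

        Because each step is a cached layer, rebuilding after changing only the last few lines is near-instant, which is what makes the "replay all the other commands" part practical.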

        Anyone who has ever set up a service for the first time and spent several hours playing with the configuration to get it working will know the feeling of getting something working, but on a messy server with unnecessary junk hanging around from failed commands... And no clue what the correct process is for deploying a second server with the same software.

        I have mostly managed to forget what it used to be like crossing your toes as you did a software update, knowing that even if you take the time to set up a test server to test the process, it's not going to be identical to the production server. Then deployment involved scheduling downtime for however long it took to complete the task. I am used to being able to deploy major software updates during business hours whilst the system is in use, with users assuming their WiFi just dropped out for a couple of seconds. (For certain systems we literally consider this safer than doing it out of hours, because we will immediately confirm whether the system is working or needs to be rolled back, instead of people turning up at 7am on Monday and everything being broken...)

        1. Dimmer Silver badge

          Re: started doing this in early 2014

          Mike, Interesting and informative post. Will have to play with docker.

          Just by chance, do you work for Boeing? Dropping out for a few seconds for a production switch on live people, sorry, live data can be a bit scary. :)

          1. Mike007 Silver badge

            Re: started doing this in early 2014

            Luckily I don't work for Boeing, the production systems I was referring to are just web apps which fundamentally are all just front ends to a database with something sitting in the middle.

            The basic requirement is that something be Linux based... If you can install it via SSH then you can paste those commands in a Dockerfile. Most server software has a turnkey docker image available so you can just tell docker to launch a PHP on Apache container with such and such directory mounted in /var/www/html
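            The turnkey case really is that short. A compose file along these lines would do it; php:8.2-apache is a real official image, while the host path and port here are made-up placeholders:

            ```yaml
            # compose.yaml -- PHP on Apache with a host directory served from /var/www/html
            services:
              web:
                image: php:8.2-apache     # official turnkey image; pin the tag for repeatable deploys
                ports:
                  - "8080:80"
                volumes:
                  - ./site:/var/www/html  # your application code, mounted straight from the host
            ```

            Swapping the image tag is then the whole "restart container to new version" step described below.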

            Basic process for deploying most updates: Snapshot the database, restart container to new version... Let's say it takes 5 minutes to spot a problem, you switch back to the old version and all is good... In the unlikely event you got some database corruption you can roll back to the snapshot and people have to re-enter 5 minutes worth of data - such scenarios are likely to result in hours worth of corrupted data if the testing isn't immediately after the deployment. (in practice we do try to coordinate with downtime/breaks for planned updates)

            The main benefit though is definitely the ability to spin up an identical environment for testing changes then just "move" it to production.

            Unless the application you are deploying is one of those pieces of crap that hard codes the server address in various places scattered all over the database so you can't just snapshot the production data to a test environment... Not naming any specific software, but if there are any wordpress developers around they know what I am talking about...

            1. Dimmer Silver badge

              Re: started doing this in early 2014

              I could tell you are a pro and knew that was not the case, but I guess the down votes did not get I was joking.

        2. Nate Amsden

          Re: started doing this in early 2014

          certainly a fine personal preference. Mine is the opposite; maybe it's just that I'm stuck in my old ways. I've been doing this for about 27 years now, and the systems I have run really well, so nothing is inspiring me to dramatically change things.

          My stuff runs for a long time. A couple of years ago I happened to notice some errors in my personal mail server log from RBLs (Realtime Blackhole Lists), and thought it was ironic/funny/crazy that I had two RBLs in my postfix config that went offline 10+ years ago (yet other than log entries it wasn't causing any issues). My mail server config is fairly unchanged in about 20 years now. I still use a software package called sanitizer, which is still in my distribution, though it hasn't seen a software update since Jan 2006.

          Of course regardless not everything can run in a docker-style container so there will be some sort of system needed anyway to manage that stuff.

          Last year I finally resolved an annoying docker/kernel/networking issue that had caused major headaches for my org's developers for the past 4-5 years. Another person worked on the issue for hours and hours but found no real fix, just some potential workarounds that helped a bit. The issue was that some software downloading dependencies as part of its build process would error out with cryptic SSL/TLS errors. The light bulb didn't go off for me until I was unable to reproduce the error outside of a docker container. But believe me, I was still throwing shit at the wall to see what would stick (after about 5-6 hours of poking it). I had no idea, never saw that before (I'd like to think I've seen a lot in my time..).

          In the end, tuning the "net.core.rmem_max" and "net.core.wmem_max" settings (from the default of 212992 up to 12582912) in the host kernel resolved the issue. I've been using Linux since 1996 and never had to touch those settings to resolve a problem (I have seen them mentioned many times over the years regarding tuning, but this was a very simple workload on a 1 CPU server). Even now I have no idea why that fixed it (at the time I was just tweaking kernel options and seeing if they had any effect).. but wow, what an annoying issue..
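          For reference, that fix amounts to a two-line sysctl fragment using the exact values mentioned above (the filename is arbitrary; apply with `sysctl --system` or a reboot):

          ```conf
          # /etc/sysctl.d/90-bufsizes.conf -- raise the maximum socket buffer sizes
          # (the kernel default for both settings was 212992 bytes)
          net.core.rmem_max = 12582912
          net.core.wmem_max = 12582912
          ```

          These set only the *ceilings* applications may request via setsockopt(), not the defaults every socket gets, which is part of why the effect on this workload is so puzzling.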

          I don't even know how to classify this issue. I want to say it was a docker problem, since it could not be reproduced outside of a docker container, but I didn't have to change docker to fix it. I don't want to say it is a networking issue if the problem occurs before the packet even leaves the network interface. Or is it a kernel problem, because I had to tweak the kernel? But if not using docker I would not have had to tweak the kernel.

  2. Neil Barnes Silver badge
    Mushroom

    "abundant and affordable energy,"

    Yay! Energy too cheap to meter.

    Again!

    1. Korev Silver badge
      Trollface

      Re: "abundant and affordable energy,"

      Don't be so cynical, we're only a decade away from commercial nuclear fusion!

      As we were ten years ago...

      1. luis river

        Re: "abundant and affordable energy,"

        True, nuclear fusion has arrived late; now it's time to quickly make modern and advanced nuclear tech available today, e.g. NATRIUM from Bill Gates and other world initiatives...

  3. nematoad Silver badge

    There may be push-back.

    ...immutable infrastructure – architectural patterns that are never changed to enhance manageability and security

    And how long will it be before the likes of Microsoft, Apple, Nvidia et al start railing against "old, outmoded technology" that disrupts the upgrade treadmill and thus fails to deliver the sorts of quarterly returns that such companies have become used to?

  4. Anonymous Coward
    Anonymous Coward

    mmm

    people are still paying for and listening to Gartner's utter bullshit?

    fools and their money comes to mind

    1. James Anderson

      Re: mmm

      100% agree -- ever go back and look at what Gartner were predicting five or ten years ago?

      All their insights were either "the bleeding obvious" or "whatever happened to ....... ".

      But some managers need a couple of consultants to tell them when to go toilet.

      1. Kevin Johnston

        Re: mmm

        The big problem is that Gartner provides lots of pretty pictures/charts which always count for more at Board Level than simply being a subject matter expert (aka the poor bugger that gets stuck with throwing away all the resilience plans to fit things into THE NEW WAY)

        It never ceases to amaze me that companies can find millions and more to move everything to new untested platforms but never find enough to pay someone to fix the broken bits that people have been screaming about for years. But I suppose the replatform will automagically fix it like it says on Figure 4b

        1. Fred Daggy Silver badge
          Devil

          Re: mmm

          In a perfect world, Daggy Inc would publish stupid reports with pretty pictures that are used by C-suite to make arguments and obtain funding. Coincidentally, Evil Daggy Inc would publish reports with pretty pictures, saying the complete opposite and marketing to C-suite. Both however, would be charging a fat fee for the privilege.

          Pictures of our founder on one publication will be me. On the other, will also be me, but with a goatee (universal symbol of evil)

      2. Bebu
        Coat

        Re: mmm

        But some managers need a couple of consultants to tell them when to go toilet.

        The brown stuff coming out the wrong end of the GI tract is a fairly reliable indication, I am led to believe, but as most manglement is generally afflicted with verbal dysentery, the consultants are, as always, redundant.

        1. Dimmer Silver badge

          Re: mmm

          And according to those same experts the next move management makes will replace management with AI.

          I know because they showed me the 8x10 glossy photos with detailed inscriptions on the back.

          1. J. Cook Silver badge
            Go

            Re: mmm

            :: awards you an Internet ::

            That's the second time I've seen a reference to "Alice's Restaurant" today. I am impressed.

  5. Charlie Clark Silver badge

    My report – buy it now!

    After increasing its prices for its commodified product line, Company X starts losing customers to competitors with cheaper alternatives.

    Virtualisation itself is done, but we will generally pay a reasonable amount for training, management and monitoring systems.

  6. Anonymous Coward
    Anonymous Coward

    I love all this nonsense, will keep us all in work for years to come, trebles all round

    1. Plest Silver badge
      Pint

      Cheers!

      I'm only a few years from retirement now but happy that the more impressive the tooling gets, the more overly-complicated people seem to be making IT and the more work there is for many of us "shovelling shit" for a very well paid living!

  7. Lee D Silver badge

    "Augmented reality in the datacenter – Gartner thinks tech that "provides technicians with real-time data visualization, remote guidance and interactive 3D models to facilitate complex tasks and maintenance processes" can reduce the time needed for troubleshooting and repairs, and make manuals redundant."

    Well, if that isn't one of the most ridiculous things I've ever heard.

    "Just put on this expensive headset, pay this expensive recurring subscription, to this expensive software, plus updates for every new device or procedures, so that you can follow what we used to print on a slip of paper or put onto a PDF and own forever".

    1. that one in the corner Silver badge

      You've missed the point, I'm afraid.

      > pay this expensive recurring subscription, to this expensive software...

      Of course they'll happily pay those subs. After they followed Gartner's other suggestion, they are no longer paying massive subs for VMWare, which means the managers have to find another sinkhole for money or their budget will be cut and their department will no longer Be Important.

      By buying into AR in the DC, managers can show the Board that they are following Best Practices and Planning For The Future as foretold by Gartner, so the funds will continue to be allocated and the managers will continue to buy Ferraris.

      1. J. Cook Silver badge

        That sinkhole will be the many 1U pizza box servers to run their apps in a de-virtualized world, at least in my industry. (each application is expecting a database server and an app server. while we've forced the vendors to share database servers for the most part, several of our LOB applications are (badly implemented) clusters of multiple app servers, interface servers, and database servers. )

        We did that for a number of years pre-virtualization.

        :: wanders off humming "Circle of Life" from "The Lion King" ::

  8. Bebu
    Childcatcher

    Network digital twins

    The increasing complexity of networks means having an offline model will be useful to test changes.

    No shit Sherlock? Who'd'a thunk it? /s

    Having a model of your complex dynamic system that you might test various changes before pushing those changes out to the production system is a really clever idea. I wonder why no one had ever considered this before? /s

    I wonder what Cisco's CML, GNS3 and EVE-NG do for a crust? /s

    These chaps actually get paid in something other than monopoly money for stating the blindingly obvious?

    1. Mike Pellatt

      Re: Network digital twins

      Came here to say that. Reading it I thought... "Haven't these dorks heard of EVE-NG?" Or that the network engineers at my previous place did exactly that? I mean, it's not as if disconnecting tens of thousands of customers because you'd got your network changes wrong is a good idea.

      Further proof that Gartner is a waste of money and oxygen.

    2. Anonymous Coward
      Anonymous Coward

      Re: Network digital twins

      I read that part of the article and went "you mean like having a development system separate from the live one? Like most big companies have had for years?"

  9. AlwaysInquisitive

    Source link?

    1. diodesign (Written by Reg staff) Silver badge

      Link added

      It's google-able but we've directly linked to it now.

      C.

  10. mikus

    It's funny a company like Broadcom can be so single-handedly polarizing. They're a behemoth that drives probably 70% or more of the Ethernet switching world, but they're just so terrible at anything software, much like Cisco was/is. They bought bad software companies and made them worse with CA and Symantec, the dregs of the enterprise industry at that point in their lives, but with VMware everyone knew the purchase would be bad - and it is!

    VMware is used by enterprise shops with more money and bad admins than management brainpower, usually Windows shops that are too scared of Linux to use it natively, and Broadcom is betting that those customers are so bad they'll take a good reaming and not complain too much - only now they are. What are Windows monkeys to do, actually have to use Hyper-V finally?!

    If Hock Tan throws a dog a bone, he doesn't want to know if it tastes good or not.

  11. Doctor Syntax Silver badge

    "the trough of disillusionment – the point at which a tech has failed to deliver on its promise."

    The more sceptical of us get there directly without climbing the hump of whatever Gartner say we should have included. Is this quantum tunnelling?

  12. anonymous boring coward Silver badge

    I can't believe "AI" wasn't mentioned in that hype-soup! What will the CEOs think of that?

  13. JamesTGrant Bronze badge

    Data centers are usually either managed by a spreadsheet - sometimes with rack layout using the power of border outlines… or alternatively a supa dupa NMS which will ‘even’ report on server h/w status and network link bandwidths, PSU per-outlet currents, etc. Seems to me that outage times are roughly the same regardless….

    (Also, backups, backups, backups!)

  14. Displacement Activity

    Or... just dump VMWare

    Two of my VPS providers (FastHosts/Ionos) are doing exactly that right now. And it looks like exactly what I need - one of them tells me they'll be able to support full-custom images after the transition.

    The statement "Migrating to new hypervisors – which Gartner terms "revirtualization" or virtual to virtual migration – is rated a tech that has reached peak hype as it is applicable to between five and twenty percent of organizations" is simply peak bollocks. I dumped bare metal 5 years ago because VPS was way more cost-effective for me, and still is, as long as you don't do anything stupid like paying Broadcom.

  15. Golgafrinch

    Just think of Gartner as an "influencer" ...

    ... except that people are willing to shell out money for its "studies".

    PS I'm currently re-reading Gustave Le Bon (https://en.wikipedia.org/wiki/The_Crowd:_A_Study_of_the_Popular_Mind - if you can read the French original, all the better for you.)

  16. Claverhouse
    Happy

    Fall-Back

    Off-grid power – Essentially privately owned generators yoked to datacenters reduce dependency on the grid, to ensure it's possible to expand. Hydrogen-powered datacenters also made the rising techs list, nuclear fusion rated a mention for its potential to provide "abundant and affordable energy," and Small Modular Nuclear Reactors are also seen as a tech to watch;

    Hopefully these can be commandeered in the event of a power failure, prolly caused by data centres having sucked too much from the regular grid.

    1. J. Cook Silver badge

      Re: Fall-Back

      *yawn* wake me up when those are actual practical things that can be built without it forever being in the prototype phase, or talk.

      Fusion has been "ten years away" for the past thirty.

      SMRs might be in production by the end of the 2020s, or maybe the 2030s - Magic 8 ball says "uncertain". Same with a modern PBR variant.

      Making hydrogen for fueling a datacenter is not *quite* zero-sum; it's being touted as more of a green energy source compared to regular means of generation.

      Solar and wind power is flat out not reliable enough for datacenter operations.

  17. Anonymous Coward
    Anonymous Coward

    Horse poop

    Gartner is forgetting the basics. 90% of the load we have to support is legacy that will never get rebuilt. It gets patched, we strangle it into obligatory OS upgrades. 20 years ago, we were saving tons of cash in hardware and datacenter costs with a 10:1 p2v ratio. Today the cloud providers are the competition. At 10:1, AWS is cheaper. However the hardware has changed significantly in recent years. Now running 100:1 is easy and relatively cheap. Way cheaper than AWS.

    The virtualization platform cost is significant, but it is way cheaper than re-architecting the workload. When you need to deliver rock solid, consistent and reliable performance, VMware will deliver. The trick is to see if you have workload that does not need that performance commitment, where you can use lower-grade virtualization platforms. The math also differs if you run tens, hundreds, or thousands of VMs.

    The good CIOs know the legacy conversion costs. They also understand the virtualization costs. Gartner's analysts only look at the bleeding edge. That only applies to new build software funded by venture capital. For the rest of us, we will focus on excising VB5. We will not go back to 1:1. Not enough datacenter rack space left.

  18. rg287 Silver badge

    We're cutting edge!

    Delighted to learn that the small business I currently work for is well ahead of the curve with our three bare metal servers.

    Take that K8s hipsters!

    1. Anonymous Coward
      Anonymous Coward

      Re: We're cutting edge!

      Sounds like you don't have enough scale for this to really matter.

      1. rg287 Silver badge

        Re: We're cutting edge!

        Sounds like you don't have enough scale for this to really matter.

        Woooooooooooooooooosh.

        That's the joke.

  19. Mostly Irrelevant

    VMWare is doing all they can to drive away their remaining customers. I mostly use virtualization for web servers; why would I even consider their product when lightweight container orchestrators exist?

    1. sten2012

      Not to mention, other hypervisors that are properly open exist too, and they are thoroughly proven to be production-ready (perhaps unlike when the initial decisions for VMware platforms were made, after which momentum kept the upgrades rolling).

      Can't just be me thinking I'd be avoiding any closed platform that can be acquired or squeezed for something so critical to the business after everything that's happened

  20. herberts ghost

    The result of "over monetization of hypervisors" might be that CPU vendors implement hardware partitioning of resources on their sockets. This would be much like HP Superdome's partitioning of 64-socket systems into smaller isolated systems, each partition with its own memory, IO and cores.
