VMware: We're gonna patent hot-swapping your VMs' host OS

VMware looks set to renew its relevance with a new patent application, which lists inventors Mukund Gunti, Vishnu Sekhar and Bernhard Poess and assigns the patent to VMware. The short version is that, if granted, VMware will have effectively patented the ability to hot swap a host server's …

  1. Anonymous Coward

    Does this mean...

    That a cloud provider could just yank the OS away from under your application if you failed to pay for a license?

    1. zanshin

      Re: Does this mean...

      "That a cloud provider could just yank the OS away from under your application if you failed to pay for a license ?"

      They can do that today. They just power down your VM(s). There's no need for fancy migration mechanics in that scenario.

  2. Anonymous Coward

    "Encapsulating the applications for transport is the easy part"

    Indeed. The hard part is encapsulating the state within the kernel, for example: open files, open sockets, dirty blocks in disk cache, device driver buffers.

    If you can migrate all this then you can do in-place hot kernel swaps, which is really all we need.

    Swapping out the userland is a non-problem; it can already be done, although an application that still has the old version of a library open will need to be restarted to pick up the new one.
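    A minimal sketch of that last point, assuming Linux: a library that was replaced on disk but is still mapped by a running process shows up in /proc/<pid>/maps tagged "(deleted)" — the signal tools like needrestart use to flag processes that need a restart. Demonstrated here against the current process, with a scratch file standing in for a library:

```python
# Detect files a process still maps after they were deleted/replaced
# on disk (Linux-only; reads /proc/<pid>/maps).
import mmap, os, tempfile

def deleted_mappings(pid="self"):
    """Return the set of deleted-but-still-mapped file paths."""
    paths = set()
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            if line.rstrip().endswith("(deleted)"):
                # the pathname field starts at the first '/'
                paths.add(line[line.index("/"):].strip())
    return paths

# Simulate an in-place upgrade: map a file, then unlink it.
fd, path = tempfile.mkstemp(suffix=".so.old")
os.write(fd, b"\0" * 4096)
m = mmap.mmap(fd, 4096)  # the old "library" stays mapped
os.unlink(path)          # the "upgrade" removes it from disk

print(any(p.startswith(path) for p in deleted_mappings()))  # prints True
```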

  3. This post has been deleted by its author

  4. Anonymous Coward

    Oh Dear

    VMware continue to solve the problems of the last decade despite clear and consistent evidence that the world has moved on. Software, new software that is, doesn't need the OS to be hot swapped in many cases because it's designed to be highly available and loosely coupled (buzzword BINGO, I know!) so that patching and other maintenance can be done by replacing rather than managing. Cattle vs Pets as they say, and VMware keep inventing ever more cosy dog baskets while Microsoft, Amazon and Google are creating ever simpler farming solutions.

    1. John Sanders

      Re: Oh Dear

      Amazon and Google maybe.

      Not MS.

      1. Lusty

        Re: Oh Dear

        Not seen Azure then? Functions and Logic Apps are very good indeed for serverless computing.

  5. Alistair

    patent what?

    1. John Sanders

      Yes, there is this niggling issue

      That VMware likes to copy paste from

  6. John Sanders

    >> Nobody wants to reboot these machines for updating

    Then something is wrong with the whole set-up.

    >> If this works, in theory telcos could update switches and routers without rebooting

    So we make the switches & routers more expensive with all the extra hardware, while at the same time we do not solve the conundrum that you require two units for resilience.

    Which leads me to think that these people who cannot restart one of their core/whatever for patching are doing it wrong.

    Patching without downtime is a solved problem if you care about it.

    Also moving VMs out of a host is something that happens automatically on most decently set-up virtualization farms, and at most a VM loses a couple of network packets.

    If my memory serves me right, VMware excels at this: the VM is first activated on another host, then stopped on its current host once it is up and running on the new one.
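    What's being described is the standard pre-copy scheme. A toy model (not VMware's code; dicts stand in for guest page frames, and the dirty-page sets are invented for illustration):

```python
# Pre-copy live migration, schematically: copy memory while the source
# keeps running, repeat for pages dirtied in the meantime, then briefly
# stop, copy the remainder, and resume on the destination.
def live_migrate(src_pages, dirtied_per_round, max_rounds=30):
    dst = {}
    dirty = set(src_pages)                # round 1: everything is dirty
    for _ in range(max_rounds):
        for page in dirty:
            dst[page] = src_pages[page]   # copy while source still runs
        dirty = dirtied_per_round.pop(0) if dirtied_per_round else set()
        if not dirty:
            break
    for page in dirty:                    # brief stop-and-copy phase
        dst[page] = src_pages[page]
    return dst                            # destination now has all pages

src = {n: f"data{n}" for n in range(8)}
assert live_migrate(src, [{1, 2}, {2}]) == src
```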

    The only problem that's not solved is when you have dedicated physical hardware for a VM, but no one does that in large-scale deployments* - except perhaps at home or for testing purposes.

    Containers... we'll talk about it another day, and trust me the problem is not the OS the containers run on.

    I guess VMware can do something clever with this, but I fail to see the "redefining future of IT" here.

    * People who get upset by generalizations usually have low IQ.

    1. Dazed and Confused

      Network switches

      >> >> If this works, in theory telcos could update switches and routers without rebooting

      I remember, 20 or so years back, having to explain to a support engineer on a training class about the need to restart things after loading certain patches. He was amazed: he'd just joined from a telco switch manufacturer and said they'd been hot-patching their switches for years. New kernel on a running switch? No problem. Replace core shared libraries? Of course, why not.

      This is old old old stuff

  7. jeffty

    Good job Microsoft didn't get there first...

    otherwise they'd be "hotswapping" us all onto Windows 10...

  8. Anonymous Coward

    Ksplice

    Isn't that prior art? And AIX can do this also, and though I really have no idea, I wouldn't be surprised if mainframes have been able to do this for a couple of decades.

    1. John Riddoch

      Re: Ksplice

      Ksplice updates the running kernel in memory, pausing execution while it does so. I assume the AIX system does the same. What VMware are proposing is detaching a CPU and some RAM from the running OS (certainly Solaris & AIX already support this, and I think Linux can on appropriate hardware too), then spinning up a new kernel/OS image on that before passing control of running applications to it. That is arguably more powerful than Ksplice, which doesn't allow changes in data structures or platform. Arguably, you could migrate your hypervisor seamlessly from Xen to KVM to VMware to Hyper-V using this technology, although it would be horrendously complex to do so.
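      The detach-a-CPU part is already visible on Linux through the sysfs CPU-hotplug interface. A minimal sketch (Linux-only; actually offlining a CPU needs root, so the second helper is illustrative only):

```python
# Sketch of the Linux CPU-hotplug interface: a CPU exposing an
# 'online' file under sysfs can be detached from the running kernel.
# cpu0 usually has no such file because the boot CPU can't be offlined.
from pathlib import Path

CPU_ROOT = Path("/sys/devices/system/cpu")

def hotpluggable_cpus():
    """Names of CPUs the running kernel allows to be offlined."""
    return sorted(p.name for p in CPU_ROOT.glob("cpu[0-9]*")
                  if (p / "online").exists())

def set_cpu_online(cpu, online=True):
    """Offline/online one CPU (root only): write 0 or 1 into sysfs."""
    (CPU_ROOT / cpu / "online").write_text("1" if online else "0")

print(hotpluggable_cpus())
```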

  9. Jan 0 Silver badge

    Re: Ksplice

    No, this isn't prior art. As described, they want to push a new kernel across the hardware; Ksplice hot-patches the running kernel and adjusts the filesystem to match, so Ksplice isn't prior art for this.

    I remember patching a running SunOS kernel on a Sun 4 back in the 90s to allow a WAN application to continue running without interrupting the flow on the WAN. It was a bit hairy as I did it from the command line using adb, rather than by running a patching program. It was only a modest set of kernel changes and I could hardly have altered all the (450KB?) kernel by hand in a reasonable time, but a suitable program could have. I imagine that people did similar things with mainframes in the 1960s, to avoid downtime.

    There will be limits to what you can alter in a running kernel, on current hardware architecture. This is a different way to approach the problem, but will also have practical limits.

  10. Nitin2016

    What if a system call was changed

    Well, what would happen if a system call is changed in the new OS, considering that the application stack was just "sitting there"?

    1. diodesign (Written by Reg staff) Silver badge

      Re: What if a system call was changed

      Never break user space.


      1. J. Cook Silver badge

        Re: What if a system call was changed

        That's why Microsoft will never be able to do it - they break userspace frequently, and in ugly ways. They've gotten better, but still...

          1. Anonymous Coward

          Re: What if a system call was changed

          Are you sure? Windows backwards compatibility is much better than Linux's. In Linux, you upgrade the OS and then recompile applications, because everything in user space may have changed. In Windows you don't.

            1. Anonymous Coward

            Re: What if a system call was changed

            I must be imagining things, then: I have a binary on my Linux system that was compiled in the mid 90s and still runs just fine. It was something a guy I knew from work at the time wrote that still comes in handy once in a while. Since I never had source and have no idea where the guy is, the binary is all I have. It still works, despite being 20 years old, despite Linux changing executable formats from a.out to ELF, and despite the 32- to 64-bit transition...
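            Old binaries keep working because the kernel's syscall ABI is stable - the "never break user space" rule quoted above. A small illustration (assumes glibc and an x86-64 or aarch64 Linux box; syscall numbers differ per architecture):

```python
# A userland binary reaches the kernel by syscall number, so a
# hot-swapped kernel must keep those numbers meaning the same thing.
# getpid is number 39 on x86-64 and 172 on aarch64.
import ctypes, os, platform

SYS_getpid = {"x86_64": 39, "aarch64": 172}[platform.machine()]
libc = ctypes.CDLL(None, use_errno=True)

# The raw syscall and the libc wrapper must agree.
assert libc.syscall(SYS_getpid) == os.getpid()
print("getpid via raw syscall matches os.getpid()")
```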

  11. batfastad

    Glee!

    > I can gleefully run 5000 containers on a standard 2 socket server today.

    Yes. 5000 containers. Doing fsck all.

    Chuck a bit of work at all of them and send us a picture of your glee!

    I bet your typical webapp would get maybe 10% more req/s throughput contained vs VMed... if that. Benefits of containers are rapid spin up/tear down and full stack of microservices/endpoints on a single host. Not improved throughput.

    Seriously though, happy to be proven wrong of course.

    1. Trevor_Pott Gold badge

      Re: Glee!

      The vast majority of workloads out there do fuck all except eat lots of RAM. CPU utilization in most datacenters - even with virtualization - is pathetic. Containers just give us a way to drive even more density and hope to get slightly better usage from our workloads.

      Whole lot of stuff just wants to sit around waiting for something to do.

      1. batfastad

        Re: Glee!

        I can agree with that. It blows my mind that I can run almost 4,000 VMs in 6U of UCS chassis and still have CPU to spare.

        My point is:

        - Webapp running in a single 64GB VM

        - Webapp running in a single 64GB physical host

        - X hundred containers of your webapp running on a 64GB physical host

        You might improve your CPU utilisation, but your webapp throughput will not improve as much as you think just because... containers.

        1. Trevor_Pott Gold badge

          Re: Glee!

          Where did I say I expected speeds to improve? I just expect to run more idle workloads. If I have 5000 workloads on my box and at any given time 64 of them are doing something, that's a lot. Incidentally, I have 72 logical processors on my 2P server, so I can have that many workloads doing their thing at any given time.
