Will containers kill VMs? There are no winners in this debate

Reg readers have a reputation for never being short of an opinion. So it is with more than a little surprise that we must declare that our latest debate, on the motion Containers will kill Virtual Machines, was a tie! 1,142 of you voted in the debate, and the vote was split right down the middle. How did we end up here? The …

  1. J27

    Containers are VMs; the whole point of containers is to make managing and deploying code to VMs easier. The core technology is the same.

    1. Brian Miller

      J27 wrote: "Containers are VMs..."

      Uh, what? From Docker: "A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings."

      A CPU VM is a hardware virtual machine, which is supposed to be isolated from everything else by hardware. It is not a package, it is an isolated virtualization of the base hardware.

      One is a package. One is hardware. The package requires a host operating system, and does not stand alone. The VM stands alone.
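
      A quick way to see that for yourself (a minimal sketch, assuming a Linux host with a running Docker daemon and the docker Python SDK installed via "pip install docker"; the image tag is just an example):

          import platform
          import docker

          client = docker.from_env()

          # Run `uname -r` inside an Alpine container. The container ships its own
          # userland, but the kernel it reports is the host's kernel, because the
          # "package" runs on the host OS rather than standing alone.
          container_kernel = client.containers.run(
              "alpine:3.19", ["uname", "-r"], remove=True
          ).decode().strip()

          print("host kernel:     ", platform.release())
          print("container kernel:", container_kernel)  # identical on a Linux host

      A VM, by contrast, boots its own kernel, so the two values need not match.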

      As for making things easier, well, only if certain vendors decide to keep their crap up to date. I work with AWS CloudHSM. The client packages for that are woefully behind for Ubuntu, and that makes an Ubuntu-based Docker image currently useless. I just finished switching our Docker images to be based on AWS Linux, as I'm hoping they will keep their own crap up to date.

      Yes, I agree with others that good packaging is something that gets overlooked. However, that is something that used to be "taught" in the workplace, and when managers with no clue are put in charge, along with "newly-educated" "software engineers", disaster strikes. Again and again.

      1. Peter Gathercole Silver badge

        @brian re. VMs

        Your view of hardware VMs is very, very naive.

        Almost none of the VM systems available today, including IBM's PowerVM and their mainframe VM, and certainly not VMware and its derived products, are actually implemented in hardware.

        There were once some real type 1 hypervisors around (I used Amdahl's MDF about 35 years ago, and I'm not even sure about that), but now, at some layer or other, all hypervisors are effectively type 2: the hypervisor may run on the bare metal, but it is still very frequently a specialized derivative of a general-purpose OS.

        I've seen some of the internals of PowerVM (the IBM hypervisor for IBM PowerPC systems), and inside the hypervisor you have a full-blown Linux kernel with much of the standard tool chain, running as a black-box turnkey system. I agree that this can be fairly hands-off (but it must get involved with sub-CPU VM scheduling), and it sets up and uses the hardware features of the platform, like CPU security rings and affinity to VM images, memory encryption, above-OS-level memory page table control, and many more that I'm not going to enumerate, but it's not a hardware hypervisor.

        All of these features are controlled by a software layer, and the security of the whole thing is only as good as the security of that layer.

        I've actually commented on this before. Putting hypervisors in the stack just moves the stack down. The hypervisor replaces an OS and schedules multiple OS images as if they were applications, and everything else moves down a tier. Containers on VMs just add yet another tier.

        Moving on to containers, and in particular Docker. Docker has the concept of the Docker Engine. The Docker Engine in its simplest form is a VM system, or possibly you could call it an OS Binary Interface (can I coin OSBI as a term?). This is what allows you to run applications from OSes other than the hosting OS (as opposed to, say, Solaris Containers, where the application still has to match the hosting OS). I admit that it is stripped down to its barest minimum, with the required support layered on top of that minimum, and may not actually be derived from the OS environment that it provides, but its purpose is to allow an application to run in an apparent OS environment, with some separation from other containers. This sounds so similar to a VM to me that the point is moot, IMHO.
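
        To make that concrete (a rough sketch rather than gospel: it assumes the docker Python SDK and a running daemon, and the image name is only an example), the userland an application sees inside a container need not match the hosting OS at all:

            import docker

            client = docker.from_env()

            # The host might be running Ubuntu, RHEL or whatever, but the application
            # inside the container sees an Alpine userland: the engine supplies the
            # "apparent OS environment" while the host kernel does the real work.
            os_release = client.containers.run(
                "alpine:3.19", ["cat", "/etc/os-release"], remove=True
            ).decode()

            print(os_release)  # reports Alpine Linux, whatever the host distro is

        Which is exactly the sort of thing a VM gives you too, just with a whole extra kernel in the way.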

        One of the primary reasons for having some form of encapsulation for an application is resource control. But OSes already have these features, in things like Linux cgroups, BSD Jails, Solaris Containers, and IBM WPARs and Workload Management (WLM). And before all of these came two things that were in AT&T (Bell Labs) UNIX in the 1970s and 80s: chroot, and then the Fair Share Scheduler, which has almost disappeared from memory but is mentioned in "The Design of the UNIX Operating System" by Maurice J. Bach (first published in 1986), which I had contact with in the late 1980s. None of this is really new.
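
        To put some flesh on that (a minimal sketch, not production code: it assumes a Linux box with the cgroup v2 unified hierarchy mounted at /sys/fs/cgroup, root privileges, the cpu and memory controllers enabled for the parent group, and a made-up group name), the modern incarnation of that resource control is just a few files:

            import os

            CGROUP_ROOT = "/sys/fs/cgroup"                 # cgroup v2 hierarchy (assumed)
            GROUP = os.path.join(CGROUP_ROOT, "demo-app")  # hypothetical group name

            os.makedirs(GROUP, exist_ok=True)

            # Cap the group at 512 MiB of RAM.
            with open(os.path.join(GROUP, "memory.max"), "w") as f:
                f.write(str(512 * 1024 * 1024))

            # Allow the group 50 ms of CPU time per 100 ms period, i.e. half a CPU;
            # a spiritual descendant of the old Fair Share Scheduler.
            with open(os.path.join(GROUP, "cpu.max"), "w") as f:
                f.write("50000 100000")

            # Move this process (and everything it forks from now on) into the group.
            with open(os.path.join(GROUP, "cgroup.procs"), "w") as f:
                f.write(str(os.getpid()))

        Container runtimes do essentially this, plus namespaces and a chroot-style pivot of the root filesystem, on your behalf.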

        With these in mind, I feel that a valid deployment should still be multiple applications running on single OS instances with correctly configured OS isolation (and some High Availability configuration). But I agree that in this day and age the resources owned by the OS are almost negligible and easily dwarfed by most of the applications themselves, and that the runtime isolation of separate OS images does actually provide some maintainability advantages for monolithic applications.

        I look after the OS and hardware that a set of large Oracle databases run on. These systems have ~400GB of memory allocated to each of them, together with 20 Power8 processors, and the OS requirement is less than 8GB, or about 2% of the memory resource of each system.

  2. katrinab Silver badge

    I would say there is a winner here:

    In order for containers to kill VMs, they would need a *lot* more than 50% of the vote in favour.

    If 51% migrated all their workloads over to containers, and 49% continued to use VMs, VMs would survive, and therefore containers would not "kill" them.

    I think, for example, a lot more than 50% of people use Linux for their web servers, but FreeBSD and Windows are still very much alive (unfortunately, in the latter case).

    I don't know if Netcraft has numbers for operating systems used. But it reports my Exchange Server OWA as being run by IIS on FreeBSD, which is obviously an impossible combination. The incoming request is handled by HAProxy on FreeBSD, which forwards it to a server running IIS on Windows Server 2019.

    The relevant servers are of course running on VMs, not containers. Containers would not be able to handle a mix of FreeBSD, Linux, and Windows on the same hardware.

    1. karlkarl Silver badge

      Most workloads on FreeBSD are run in Jails, so that is very much "containers", albeit full-system containers. Heck, even the older Solaris Zones or AIX LPARs have always been with us as containers; it certainly isn't new tech.

      Also, the vote was "50% believe containers will kill VMs". This is not the same thing as "50% are running VMs"; that is already factored in. It is suggesting that 50% of people have seen such an uptake of containers versus VMs that they believe VMs to be an obsolete technology. Possibly they have seen 5% VMs vs 95% containers, for example.

      1. Peter Gathercole Silver badge

        @karlkarl

        IBM LPARs are not containers, at least on Power. They are full-blown OS instances, so they are virtual machines, not containers.

        On System z there may be slightly more debate, because of the way that Linux instances (in particular) can effectively share OS code (similar to read-only shared text segments for UNIX processes) across different instances of the same Linux VM.

        The AIX feature most like BSD Jails is WPARs, especially when used with WLM.

        1. karlkarl Silver badge

          Re: @karlkarl

          Thanks for the correction. I realized it after I had already run out of time to update the post and... kind of hoped that no-one would notice!

          I probably should have realized I wouldn't get away with it for long ;)

  3. thames

    There will be a continuum of technologies, not a single answer.

    I suspect there will be continuing need for both VMs and containers. Each has its pros and cons in different areas of application.

    There is also what I would consider a third category, which is whatever category things like Snap packages fall into. They also incorporate an application's dependencies into a single bundle which is managed and updated as a package, and which offers some degree of isolation from the rest of the system. On the negative side, they tend to be noticeably slower to start up than native packages because they carry their own dependencies inside them.

    So there would seem to be a continuum of native packages (e.g. debs), Snaps, containers, and VMs, each with its own advantages and disadvantages. Most people dealing with IT systems are going to need to understand all of them, because the world is getting more complicated, not simpler.

    1. katrinab Silver badge

      Re: There will be a continuum of technologies, not a single answer.

      The future of desktop apps looks to be Electron abominations running in an instance of Chromium, which has way more overhead than an operating system.

      1. thames

        Re: There will be a continuum of technologies, not a single answer.

        Electron is solving a different problem, which is mainly the lack of good cross-platform GUIs. I suspect that Electron will fade away when better solutions come along.

  4. Throatwarbler Mangrove Silver badge

    Packaging

    Ah, yes, I remember the good old days of running "./configure; make; make install" and just having it turn out all right with no unresolved dependencies whatsoever, especially not issues with the developer having statically linked to one particular version of a library in another package and thus having the software fail to build. Nope, that certainly never happened. Likewise, I have never struggled with RPMs that fail to install because they depend on a particular micro-version of an obscure package and adamantly refuse to accept that version 1.3.11511516-33 is just as valid as 1.3.11511516-32. Not saying that containers solve all those problems, but they do put more power in the developers' or package-builders' hands to ensure that dependencies are resolved before deployment.

    1. Peter Gathercole Silver badge

      Re: Packaging

      The current thinking is that Flatpak or Snaps should solve this problem, but the extra overhead of these things makes me shudder.

  5. Michael Wojcik Silver badge

    15 years?

    "VMs have been through the wringer for nearly 15 years"

    You're off by four decades. CP-40 was doing VMs in 1965.

    VMs and containers are both well-established technologies. Some kids might be discovering them for the first time, but they've both been around for many, many years.

    Believing containers will "kill" VMs is a bit like believing Java will kill COBOL, or Python will kill Fortran. Or that pickup trucks will kill hatchbacks. Or airplanes will kill ships. It's a demonstration of gross ignorance (accidental or willful) of IT history and the durability of established technology.
