Hypervisor indecisive? Today's contenders from yesterday's Hipsters

The origins of the hypervisor can be traced back to IBM’s mainframe systems. Big Blue implemented something approximating a virtualisation platform as an experimental system in the mid-sixties but it wasn’t until 1985 that the idea of the logical partition (or LPAR) on the pSeries and zSeries delivered something recognisable as …

  1. Nate Amsden

    why spend time

    "why spend time, energy and money virtualising more than you have to?"

    Because in many cases using containers would require much more time and energy (as in human energy, and thus money) to manage than virtualization.

    I have 6 containers deployed at my company for a *very* specific purpose (3 hosts with 2 containers per host; each container runs an identical workload). They work well. They were built about a year ago and haven't been touched much since. I have thought about broadening that deployment a bit more this year, but I'm not sure yet. I use basic LXC on top of Ubuntu 12.04, no Docker around these parts. I adapted my existing provisioning system that we use for VMs (which can work with physical hosts too) to support containers.
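
    For anyone who hasn't used plain LXC, the day-to-day is roughly this (the container name is made up, and flags vary a little between LXC versions):

      sudo lxc-create -t ubuntu -n app01   # build a container from the ubuntu template
      sudo lxc-start -n app01 -d           # start it in the background
      sudo lxc-info -n app01               # check state and PID
      sudo lxc-stop -n app01               # shut it down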

    Containers are nice but have a lot of limitations (lack of flexibility). They make a lot of sense if you are deploying stuff out wide - lots of similar or identical systems, in similar or identical configurations (especially network-wise). They're also most useful if you are working in a 'built to fail' environment, since you are probably not running your containers on a SAN, and unless things have changed containers don't offer live migration of any sort. So if I need to upgrade the firmware or kernel on the host, the containers on that host all require downtime.

    So for me, 6 containers, and around 540 VMs in VMware.

    I've had one VMware ESX host fail in the past 9 years (my history of using ESX; I've used other VMware products too, of course - in this case it was a failing voltage regulator). Nagios went nuts after a couple of minutes; I walked to my desk and by that time VMware had moved the VMs to other hosts in the cluster and restarted them (I had to do some manual recovery for a few of the apps). I don't think that happens with Docker, does it? (You don't have to answer that question.)

    1. Anonymous Coward
      Anonymous Coward

      Re: why spend time

      Anon because I'm at work.

      OpenVZ containers live migrate just fine but you need your own management tools for HA/failover. I run OpenVZ containers and KVM VMs on the same nodes actually, using Proxmox, and both play nice with many flavors of shared storage. That said, I've ended up with a similar configuration where it's over 90% VMs.
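
      For the curious, an online migration with the stock OpenVZ tooling is about as simple as this (CTID and hostname are invented; Proxmox wraps the same mechanism in its own tools):

        vzmigrate --online node2.example.com 101   # live-migrate container 101 to node2
        vzctl status 101                           # confirm the container's state afterwards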

      Even though my containers support live migration, can run from the same shared storage, are measurably faster, and support snapshots like real VMs, full fat virtual machines are just less hassle. Particularly when it comes to networking. Also getting kernel mods or weird file system configurations to work is universally more trouble than it's worth. I like having both but realistically if you've got VMs only and that's working out, you aren't missing much (unless you're absolutely starving for compute, then containers are the only sensible choice).

    2. Anonymous Coward
      Anonymous Coward

      Bare metal vs hosted hypervisors

      Nice article, but please don't keep perpetuating the myth of "bare metal" vs. hosted hypervisors (i.e. Types 1 and 2)... the difference is basically how many user-space processes you allow to run in the host OS, and little more. All so-called bare-metal hypervisors, VMware included, are just normal OSes that have been stripped of unnecessary stuff.

      Maybe this is something that would be difficult to do yourself with a closed OS like Windows, but at least on Linux the line between Type 1 and 2 is very blurry, as it depends on how you configure the kernel and services on your host OS.

  2. fiddley

    That's fundamentally wrong about Hyper-V. It's still a bare metal hypervisor with a management VM installed atop that.

  3. noominy.noom

    iSeries was early also

    Don't recall the pSeries having LPARs in the late 90s. The iSeries did. They might have been AS400s still at the time. I've always liked OS400 but I don't remember all of the timelines. My last pSeries was in 2008 but I still have a couple of iSeries, along with a small cluster of ESX servers. I don't have any Hyper-V though I do have several dozen Windows servers (mostly virtual.) I keep saying I'm going to give Hyper-V a whirl but I never seem to get to it.

    In the early aughts I also had some Sun systems and looked at containers (more virtual machines than containers in the Docker sense.) I didn't make a case for using them in production so didn't get a good feel for them.

  4. klaxhu

    you two

    the two of you still live somewhere 10 years back in time.

    normal people are either 100% virtualised or have 100% as a target for the next year

  5. Bamboozled

    AIX WPARs

    IBM AIX has had WPARs for quite a few years too - the IBM version of Docker.

    1. Matt 21

      ICL VME

      I'm not certain of the precise timeline but ICL's VME grew out of something else which started in the 60s, I think. I don't know who got their first but it looked pretty close between IBM and what became ICL.

      1. Michael Wojcik Silver badge

        Re: ICL VME

        I don't know who got their [sic] first but it looked pretty close between IBM and what became ICL.

        It was IBM, at least for commercial availability. VM/370 came out in August 1972; the first ICL VME release was in '74.

        As lab projects, they're sufficiently contemporaneous that we can consider them simultaneous independent inventions.

        1. Matt 21

          Re: ICL VME

          Thanks for the info and for spotting the incorrect use of their!

          I quite liked VME at the time. CAFS was also an interesting innovation for its time.

          1. Anonymous Coward
            Anonymous Coward

            Re: ICL VME

            CAFS was way ahead of its time! I moved away from ICL kit many years ago - is it still being used?

  6. guyr

    IBM VM/CMS

    Not clear on the history lesson. IBM had VM/CMS (that first part stands for ... you guessed it ... Virtual Machine) back in 1972, way before 1985. And yes, that was a true VM; each user appeared to have his/her own complete system, fully separated from all other users.

    1. Michael Wojcik Silver badge

      Re: IBM VM/CMS

      VM was true virtualization, but it was a full OS that provided virtual machines that CMS and guest OSes could run in, rather than a hypervisor. Arguably IBM had hypervisor technology as far back as the mid-60s with CP-40, and even offered it in commercial products (e.g. VMF). But VM/CMS wasn't a hypervisor, strictly speaking - it was a host OS that provided virtual machines. Hypervisors aren't full OSes.

      Also, a bit of research has reminded me that CP-67 was commercially released, in a fashion (as an unsupported open-source OS), in the late '60s. The first version of CMS (then the Cambridge Monitor System¹, later Conversational Monitor System) ran on CP-67. CP-67 became a project known as CP/370, which was what was released as VM/370 with CMS - a combination generally referred to as VM/CMS. So IBM definitely beats ICL for the "first virtualizing OS" crown.

      CMS is basically the shell for VM.

      I'm not knocking VM, mind; I have fond memories of using VM/CMS, and VM did a terrific job of virtualizing the system and hosting guest OSes. Basically everyone with more than the smallest IBM mainframe workloads ran it. Even the small software startup I used to work for ran it, so that we could run MVS and VSE to support different product platforms.

      ¹Named for Cambridge, Massachusetts, site of IBM's Cambridge Scientific Center. In the late '80s I worked for another IBM group in the same building as the CSC, though that wasn't the building they were in when CMS was invented, and met some of those CSC folks.

      1. Big Ed

        Re: IBM VM/CMS

        A bit of nuance... VM/CMS consisted of two components: CP, the control program, and CMS; they were separate components. Guest VMs were defined to CP and could run virtually any MF OS of the day, or CMS.

        End users were given CMS accounts - the virtual desktop of the day - primarily for text editing, email, statistical analysis and early data-warehouse analysis, along with a smattering of lightweight applications. Operations departments and developers would run production and test MF OSes in their VMs.

        CP itself without a guest (MF OS or CMS) could not do anything useful, and I would argue that it truly was a hypervisor.

        And I would also argue that CMS was an early predecessor to today's VDI desktops and apps.

        The VM/CMS environment was an early experiment in open source. All of the code was distributed as MF Assembler source, which was modified and hooked by many. There was a rich community in the day and a lot of contributions were made to the base; the email system, for example, was written by programmers from Standard Oil. A lot of early client/server concepts between VMs were developed too, although at the time we hadn't quite figured out asynchronous persistent queues.

        As CP evolved, the code was moved to "microcode" that became the basis for LPARs on the zSeries, pSeries and iSeries.

        VM/CMS died with the proliferation of the IBM PC, along with IBM's move to Object Code Only. The OCO move stifled innovation, as IBM would only provide exits for reasons it deemed worthy.

        I feel blessed to have lived long enough to see computing history repeat itself, and to see the mistakes of IBM's OCO policy answered in the form of today's active open source communities.

  7. BinkyTheMagicPaperclip Silver badge

    This article is fundamentally wrong

    Hyper-V is a type 1 (bare metal) hypervisor, with cut-down, Windows-derived components (not full-fat Windows) on top of it.

    Xen, and also XenServer, is a type 1 hypervisor. It uses a paravirtualised OS (designed to be Xen aware) as domain 0. Dom0 manages access to devices (technically you can run devices off a stub domain, but that adds complexity); it may also run QEMU, which provides emulated devices only - not CPU emulation. The Dom0 can be Linux, NetBSD, or Solaris. For XenServer, dom0 is a version of Linux.

    Why would you use NetBSD as a dom0? Well, Xen is GPL2, but NetBSD is BSD-licensed and there's been a fair bit of work done to create custom embedded NetBSD kernels for specific purposes. Just be aware, if you're using PCI passthrough, that there are a fair few Linuxisms you'll have to work around, and that passthrough of graphics cards is currently non-functional.

    There are then DomUs (guest domains) that range from fully hardware-virtualised (potentially Xen unaware) to fully paravirtualised (completely Xen aware). The drivers used within these domains can again be either hardware-virtualised, paravirtualised, or in-between (which can improve performance).
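
    To give a feel for it, a minimal xl guest config looks roughly like this - names and paths are invented, and the exact keys vary between Xen versions (older releases use builder= rather than type=):

      # /etc/xen/guest1.cfg - illustrative sketch only
      name   = "guest1"
      type   = "hvm"        # fully hardware-virtualised; "pv" for a paravirtualised DomU
      memory = 2048
      vcpus  = 2
      disk   = [ 'phy:/dev/vg0/guest1,xvda,w' ]
      vif    = [ 'bridge=xenbr0' ]

    Then 'xl create /etc/xen/guest1.cfg' from dom0 asks the hypervisor to build the DomU.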

    KVM is a type 2 hypervisor. It runs alongside Linux (and very badly under FreeBSD, with hacks and the Linux compatibility kernel module - don't bother) as a QEMU accelerator. QEMU performs the instruction translation, with optional KVM assistance, and it also provides the emulated devices. KVM also comes bundled with VFIO on Linux, the most functional PCI passthrough support, particularly in the case of graphics passthrough. These are all separate components: VFIO will work with QEMU even without KVM.
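
    As a rough illustration of that split (the disk image and PCI address are placeholders): QEMU is the thing you actually run, KVM is essentially an acceleration flag, and VFIO hands a host PCI device straight to the guest:

      # emulated disk from QEMU, KVM acceleration, one host PCI device passed through via vfio-pci
      qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
          -drive file=guest.qcow2,format=qcow2 \
          -device vfio-pci,host=01:00.0

    (The device has to be bound to the vfio-pci driver on the host first; drop -enable-kvm and QEMU falls back to pure software translation, which is the 'separate components' point.)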

    FreeBSD has bhyve, its own type 2 hypervisor which works with the FreeBSD kernel. This is quite new, as of FreeBSD 10.0.

    Jails are something entirely different.

  8. Yugguy

    Ludicrous stupid beard fashion

    I can't wait until this fashion disappears.

    19 year old "men" look stupid with massive beards.

    1. Alan Bourke

      Re: Ludicrous stupid beard fashion

      I'm waiting until they're not trendy, then I'm growing one.

      1. druck Silver badge
        Alert

        Re: Ludicrous stupid beard fashion

        I'm just hoping they won't become compulsory.

  9. Jamie Jones Silver badge

    coming full circle...

    "With the rise of the container as a method of virtualisation I’m reminded of Big Blue’s LPAR system and how logical partitions kick-started the technologies we have come to rely on, and can’t help but think we are coming full circle. ®"

    Just as with thin versus fat clients, cloud versus local storage, software purchase versus rental, and GUI versus command shell.

    I can't help but think that changes are pushed by suits and salesmen. When people get fed up with the failings, things slowly migrate to how they were before, until all is forgotten and someone pushes the new shiny-shiny all over again.

    The mainframe people saw it with the moves to Unix; us Unix people saw it with the (attempted) move to Windows.

    As the saying goes, those who forget history...

  10. Reg Whitepaper

    XenServer (http://blogs.citrix.com/2013/06/25/xenserver-6-2-is-now-fully-open-source/) is fully open source now too, as well as having commercial support available. So don't discount it thinking you'll have to pay big bucks.

    Live migration between machines without the need for shared storage is fricking fantastic!

  11. captain_solo

    History?

    Amdahl had MDF in 1988 - it forced IBM to market a virtual partition solution, since it meant their customers could replace multiple IBM big-iron boxes with a single Amdahl one with isolated domains, and MVS didn't know the difference. It was less like today's VMs and more like physical/logical domaining. Before that, the IBM virtual server ran in user space. It's interesting how many times the cycle goes around and comes around.

    Same basic technology underpinned the Dynamic Domains in Sun and Fujitsu Sparc RISC servers back in the day.

    Solaris VM Server for SPARC gives you zero-overhead logical domains - kinda like an IBM LPAR. These are different in some ways from a "VM" as in Xen or VMware, because they are hardware/firmware-based hypervisors, so they have very low software overhead, less latency getting on CPU, etc.: the instances have dedicated access to either physical CPU/memory addresses (Oracle) or virtual/timesliced CPU/memory (IBM). Both are very mature products, though less flexible than many software VM solutions or Docker/zone/WPAR OS partitions.
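
    On the Oracle side, the control-domain end of that is just a handful of ldm commands - the domain name and sizes below are invented, purely to show the shape of it:

      ldm add-domain ldg1
      ldm add-vcpu 8 ldg1
      ldm set-memory 8G ldg1
      ldm bind-domain ldg1
      ldm start-domain ldg1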

    The lightest weight would be the Docker/Solaris Zone/BSD Jail/AIX WPar options.

    I laugh when young'uns talk about virtualization like it's new. Better, more widely adopted, more standardized, sure, but it's all been done.

  12. Andrew Stubbs

    I believe Joyent deserve some love here too for the work they've already done porting KVM to the Illumos family, and for the work they're currently doing resurrecting LX branded zones in SmartOS and bringing them to 64-bit.

  13. JimProfit

    I'm surprised an article about containers omitted Solaris/Illumos zones, which are the best container technology. They're mature and secure, each zone has a full IP stack, and they don't suffer the limitations of Linux containers.
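
    For comparison with the Linux equivalents, standing one up on stock Solaris is about this much work (the zone name and path are just examples; SmartOS wraps the same thing in vmadm with a JSON manifest):

      zonecfg -z web01 'create; set zonepath=/zones/web01'
      zoneadm -z web01 install
      zoneadm -z web01 boot
      zlogin web01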

    Joyent even offer Linux-compatible Illumos zones on their SmartOS hypervisor.

    They also implemented the Docker APIs on their IaaS solution, so the datacenter is presented as one large, elastic Docker host. They call it Triton.

    The best way to run Docker in production, IMHO.
