
Will containers kill VMs? There are no winners in this debate

Peter Gathercole Silver badge

@brian re. VMs

Your view of hardware VMs is very, very naive.

Almost none of the VM systems available today, including IBM's PowerVM and their mainframe VM (and certainly not VMware and its derived products), are actually implemented in hardware.

There were once some real Type 1 hypervisors around (I used Amdahl's MDF about 35 years ago, and I'm not even sure about that one), but now, at some layer or other, all hypervisors are effectively Type 2: the hypervisor may run on the bare metal, but it is still very frequently a specialized derivative of a general-purpose OS.

I've seen some of the internals of PowerVM (the IBM hypervisor for IBM Power systems), and inside the hypervisor you have a full-blown Linux kernel with much of the standard tool chain, running as a black-box turnkey system. I agree that this can be fairly hands-off (although it must get involved in sub-CPU VM scheduling), and that it sets up and uses the hardware features of the platform, like CPU security rings, affinity to VM images, memory encryption, above-OS-level memory page table control, and many more that I'm not going to enumerate, but it's not a hardware hypervisor.
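
As a hedged aside on the sub-CPU scheduling point: on a Linux LPAR hosted by PowerVM, the ppc64 kernel exposes some of the per-partition figures the hypervisor's scheduler works with through /proc/ppc64/lparcfg. A minimal sketch in Python; the path is real, but the field names are from memory and vary by kernel version:

# Sketch: dump the per-partition scheduling figures a PowerVM hypervisor
# exposes to a Linux LPAR. Assumes a ppc64 Linux guest with
# /proc/ppc64/lparcfg present; field names are from memory.
from pathlib import Path

LPARCFG = Path("/proc/ppc64/lparcfg")

def partition_info() -> dict:
    info = {}
    for line in LPARCFG.read_text().splitlines():
        key, sep, value = line.partition("=")
        if sep:
            info[key.strip()] = value.strip()
    return info

if __name__ == "__main__":
    info = partition_info()
    # Entitlement and capping are what the hypervisor honours when it
    # time-slices physical cores among partitions (the sub-CPU scheduling
    # mentioned above).
    for key in ("partition_entitled_capacity", "capped", "shared_processor_mode"):
        print(key, "=", info.get(key, "<not present>"))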

All of these features are controlled by a software layer, and its security is only as good as the security of that layer.

I've actually commented on this before. Putting hypervisors in the stack just moves the stack down: the hypervisor replaces an OS and schedules multiple OS images as if they were applications, and everything else moves down a tier. Containers on VMs just add yet another tier.

Moving on to containers, and in particular Docker. Docker has the concept of the Docker Engine, which in its simplest form is a VM system, or possibly you could call it an OS Binary Interface (can I coin OSBI as a term?). It is what allows you to run applications from OS environments other than the hosting OS (as opposed to, say, Solaris Containers, where the application still has to match the hosting OS). I admit that it is stripped down to its barest minimum, with the required support layered on top of that minimum, and that it may not actually be derived from the OS environment it provides, but its purpose is to allow an application to run in an apparent OS environment, with some separation from other containers. That sounds so similar to a VM to me that the point is moot, IMHO.
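
To illustrate that (a hedged sketch using the Docker SDK for Python; the image tag and the assumption of an Ubuntu host are mine, not anything from the thread): the application sees an Alpine userland even though the host runs something else entirely, with the Engine supplying the apparent OS environment and the separation.

# Sketch, assuming the Docker SDK for Python ("docker" on PyPI) and a
# running Docker Engine. The host distro and image tag are illustrative.
import docker

client = docker.from_env()

# The host might be Ubuntu; the container gets an Alpine userland.
output = client.containers.run(
    "alpine:3.19",           # a userland that does not match the host's
    "cat /etc/os-release",   # what the application sees as "its" OS
    remove=True,             # clean the container up afterwards
)
print(output.decode())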

One of the primary reasons for having some form of encapsulation for an application is resource control. But OSes already have these features, in things like Linux cgroups, BSD Jails, Solaris Containers, and IBM WPARs and Workload Manager (WLM). And predating all of these are two things that were in AT&T (Bell Labs) UNIX in the 1970s and 80s: chroot, and then the Fair Share Scheduler, which has almost disappeared from memory but is mentioned in "The Design of the UNIX Operating System" by Maurice J. Bach (first published in 1986), which I had contact with in the late 1980s. None of this is really new.
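
For the Linux end of that list, a minimal sketch of what that resource control looks like with cgroups (assuming a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup, root privileges, and with the group name and workload made up purely for illustration):

# Sketch of OS-level resource control with Linux cgroup v2.
# Assumes a unified hierarchy at /sys/fs/cgroup and root privileges;
# the group name "capped-app" and the workload are illustrative only.
import subprocess
from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/capped-app")

def run_capped(cmd, memory_max="512M", cpu_weight="50"):
    CGROUP.mkdir(exist_ok=True)
    (CGROUP / "memory.max").write_text(memory_max)   # hard memory ceiling
    (CGROUP / "cpu.weight").write_text(cpu_weight)   # proportional CPU share
    proc = subprocess.Popen(cmd)
    # Move the process into the group; the kernel enforces the limits.
    # (A real tool would place it in the group before it starts running.)
    (CGROUP / "cgroup.procs").write_text(str(proc.pid))
    return proc.wait()

if __name__ == "__main__":
    # Stand-in workload; any long-running command would do.
    run_capped(["/bin/sleep", "30"])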

With these in mind, I feel that a valid deployment is still multiple applications running on a single OS instance with correctly configured OS isolation (and some High Availability configuration). But I agree that, in this day and age, the resources owned by the OS itself are almost negligible and easily dwarfed by most of the applications, and that the runtime isolation of separate OS images does provide some maintainability advantages for monolithic applications.

I look after the OS and hardware that a set of large Oracle databases run on. These systems each have ~400GB of memory allocated to them, together with 20 POWER8 processors, and the OS requirement is less than 8GB, or about 2% of the memory resource of each system.
