Now listen, Gartner – virtualisation and containers ARE different

Gartner recently released its Magic Quadrant for x86 Server Virtualisation Infrastructure. In it, the mega analyst lumps together hypervisor-based virtualisation and containers. This is wrong, and as I've discussed before, virtualisation and containerisation are different. Even if you consider all the differentiators in the …

  1. Nate Amsden

    containers are the future

    That's quite a stretch there, stretch.

    Containers have their use cases, and "service-oriented architecture" has been around for a very VERY long time (my first exposure to it was 2007, but I'm sure it was around much earlier than that). Containers have been around for a long time too (12-15+ years? on some platform(s), anyway).

    When (or if) containers can provide the same level of mobility that VMs have, then they will be pretty set to take on VMs. Until that time their deployment will probably be limited to larger-scale setups (in the same sense that software-defined networking is limited to those setups too).

    I do use containers myself; currently I have 9 containers deployed (LXC on Ubuntu 12.04 LTS on physical hardware), alongside roughly 600 VMs. The containers are there for a very specific use case and they serve that purpose well. Six of the containers have been running continuously for over a year at this point (i.e. we don't do "rapid deploy and delete"); the other 3 containers are only one month old and haven't seen production use yet.

    Containers should be on that hype bandwagon from Gartner that El Reg covered in another article today, because they are mostly hype. They aren't magical. They aren't revolutionary. They aren't even NEW.

    1. Matt Bryant Silver badge
      Go

      Re: Nate Amsden Re: containers are the future

      ".....Containers have their use cases...." Definitely, as in "what is the cheap way to stack existing hard systems?" They cannot replace hypervisored VMs for one very simple reason - containers share far too much of the underlying OS to ensure one container misbehaving cannot affect other containers. Sure, they claim they can by portioning out disk I/O, RAM, CPU time, etc., but VMs using individual OS instances are therefore more resilient and will always be more attractive for high-availability solutions. Pottie mentioned Slowaris Constrainers (sic), maybe he should read the following Oracle best practices for RAC as it points out very early on that real HA and containers means dumping all the stacking advantages of containers; "....Provision one Oracle Solaris Container per node or server....." - http://www.oracle.com/technetwork/articles/systems-hardware-architecture/deploying-rac-in-containers-168438.pdf

      1. thames

        Re: Nate Amsden containers are the future

        @Matt Bryant - I don't think you understand what Trevor wrote. The trend towards containers on Linux isn't just to stuff existing things into containers. It's part of a change in design towards using "microservices". Redundancy and reliability are supposed to be designed into the application system from the ground up, rather than something which is tacked onto the outside afterwards by an administrator.

        As for whatever limitations you may think Solaris containers have, that's not really relevant here, because nobody is talking about Solaris in this discussion. There are containers and there are containers, and not all of them work the same. As Trevor mentioned, Virtuozzo has been around for years, and the generic Linux name for it is OpenVZ, which has been the mainstay of cheap web hosting for a very long time. However, the recent improvements in Linux have extended container capabilities far beyond those earlier efforts. It's not one change or one thing, but rather the sum of many different improvements.

        With regard to how containers compare to traditional VMs: because the OS kernel has a better idea of what the application is doing, rather than just the very opaque "black box" view it has of a traditional VM, it can constrain the application's behaviour and enforce policy on it much better. Whether or not Solaris can do this is, as I said, irrelevant.
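        To make that concrete, here is a minimal sketch of the kind of per-application policy the kernel can enforce directly. It assumes the cgroup-v1 layout of that era (Ubuntu 12.04/14.04), root privileges, and a made-up group name; real container runtimes such as LXC set this up for you:

```python
# Minimal cgroup-v1 sketch: cap a process tree's memory and let the kernel
# account for and enforce the limit. The paths and the "demo" group name are
# illustrative only.

import os

CGROUP = "/sys/fs/cgroup/memory/demo"   # hypothetical memory cgroup
LIMIT_BYTES = 256 * 1024 * 1024         # 256 MiB cap

os.makedirs(CGROUP, exist_ok=True)

# Tell the kernel how much memory this group may use.
with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
    f.write(str(LIMIT_BYTES))

# Place the current process (and its future children) into the group; from now
# on the kernel tracks and limits this process tree's memory as one unit.
with open(os.path.join(CGROUP, "tasks"), "w") as f:
    f.write(str(os.getpid()))

print("memory capped at", LIMIT_BYTES // 2**20, "MiB for pid", os.getpid())
```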

        1. Nate Amsden

          Re: Nate Amsden containers are the future

          I know you get it, but hopefully management types don't get confused and think they need containers to deploy microservices, when microservices deploy just fine in VMs.

          My first round of microservices, 8 years ago, ran on physical hardware (in production); each microservice ran its own instance of Apache on a different port (Ruby on Rails at the time, ugh, hello hype bandwagon again) and talked to the other services through the load balancers.
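          The pattern is nothing exotic: each service is just an ordinary process listening on its own port, reaching its peers through a load balancer name rather than a specific host. A minimal sketch (the service name, ports and load-balancer hostname are invented, and the real stack was Ruby/Apache rather than Python):

```python
# A tiny "microservice" as a plain process on a port: it serves one endpoint and
# calls a peer service through a load balancer address rather than a named box.
# Hostname, ports and paths below are hypothetical placeholders.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

PRICING_LB = "http://pricing.lb.internal:8081/price"  # hypothetical LB address

class CatalogService(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            # Peer lookups go through the load balancer, never a specific server.
            price = urlopen(PRICING_LB, timeout=2).read().decode()
        except OSError:
            price = "unavailable"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(("widget price: %s\n" % price).encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), CatalogService).serve_forever()
```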

          I'll elaborate a bit on my usage of containers, since it may be non-standard and perhaps the information could help someone. A long time ago, in a galaxy not too far away, our software deployment model for production was to have two "farms" of servers, so we would deploy to one "farm" and switch over to it. In the early days we were in a public cloud, so the concept was that we "build" a farm, deploy to it, switch to it, and destroy the other farm. Reality didn't happen that way, and both farms just stayed up all the time. When we need more capacity we activate both farms.

          After six months or so we migrated to our own hosted infrastructure, where the cost of running two farms wasn't a big deal; because the inactive farm isn't consuming resources, it really doesn't cost much of anything to maintain (in the grand scheme of things).

          Our main e-commerce platform is a commercial product we license, and it is licensed based on the number of servers you have (regardless of whether each server is a VM, physical box or container, and regardless of the number of CPUs or sockets, etc.). One server = 1 license. This application is CPU hungry. The license costs are not cheap ($15k/server/year). For a while we ran this application in production on VMware; this worked fine, though it wanted ever-increasing amounts of CPU.

          In order to scale more cost-effectively I decided early last year to switch to physical servers, but I wanted to keep the same ability to switch back and forth between two "farms". Originally I thought of just having one OS and two sets of directories, but that configuration was much more complicated and would be different from every other environment we have. Another option was to use a hypervisor (with only two VMs on the host). That seemed kind of wasteful. Then the idea of containers hit me, and it turned out to be a great solution.

          The containers by themselves have complete access to the underlying hardware, all the CPU and memory (though I do have LXC memory limits in place; CPU was more of a concern). Only one container is active at any given time and has full access to the underlying CPU. If a host goes down, that is OK; there are two other hosts (and only one host of the 3 is required for operation). We saved a lot by not licensing vSphere (little point with basically 1 container or VM active at any given time), avoided the complexity of some other hypervisor nobody in the company has experience with, and it's pretty simple. I calculated that the new hosts had 400% more CPU horsepower than our existing VM configuration (with both "farms" active). Today these physical servers typically run at under 5% CPU (the highest I have seen is 25% outside of application malfunctions, where I saw 100% on a couple of occasions). I don't mind "wasting" CPU resources on them, because the architecture paid for itself pretty quickly in saved licensing costs and allows enormous capacity to burst into if required.
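          Roughly speaking, the "switch farms" step boils down to stopping one container and starting the other. A sketch using the classic LXC tools shipped with Ubuntu 12.04 (the container names are made up, not our real ones):

```python
# Rough sketch of the "two farms, one active" switch using the classic lxc-*
# command-line tools from Ubuntu 12.04. Container names are hypothetical.

import subprocess

FARMS = {"blue": "ecom-blue", "green": "ecom-green"}  # hypothetical LXC containers

def switch_to(farm: str) -> None:
    """Stop the currently active farm's container and start the other one."""
    other = "green" if farm == "blue" else "blue"
    subprocess.check_call(["lxc-stop", "-n", FARMS[other]])
    subprocess.check_call(["lxc-start", "-n", FARMS[farm], "-d"])  # -d: run detached

if __name__ == "__main__":
    switch_to("green")   # e.g. deploy to "green", then cut over to it
```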

          I don't care about mobility for this specific app because it is just a web server. I wouldn't put a database server or memcached server, etc., on these container hosts.

          Headaches I find with LXC on Ubuntu 12.04 (not sure if other implementations are better) include:

          - Not being able to see accurate CPU usage for a container (all I can see is host CPU usage; a host-side workaround is sketched below)

          - Not getting accurate memory info in the container (container shows host memory regardless of container limits)

          - The process list is really complicated on the host (e.g. multiple Postfix processes, lots of Apache processes, and the default tools don't say what belongs to a container and what is local to the "main" host OS)

          - autofs for NFS does not work in a container (kernel issue) - this one is really annoying

          - Unable to have multiple routing tables on the container host without perhaps incredibly complex kernel routing rules (e.g. container 1 lives in VLAN 1, container 2 lives in VLAN 2, different IP space, different gateway - when I looked into this last year it did not seem feasible)

          I believe all of the above are kernel-level issues, but I could be wrong.

          All of those are deal breakers for me for any larger scale container deployment. I can target very specific applications, but in general the container hosts are too limiting in functionality to make them suitable for replacing the VMware systems.
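          The first two headaches can at least be worked around from the host by reading the per-container cgroup accounting files directly. A rough sketch, assuming the cgroup-v1 paths LXC typically uses on 12.04 (adjust for your own layout):

```python
# Host-side workaround for the first two headaches: read per-container CPU and
# memory accounting straight from the cgroup filesystem. Paths assume the
# cgroup-v1 layout LXC uses on Ubuntu 12.04; adjust for other releases.

CPU_USAGE = "/sys/fs/cgroup/cpuacct/lxc/{name}/cpuacct.usage"        # nanoseconds
MEM_USAGE = "/sys/fs/cgroup/memory/lxc/{name}/memory.usage_in_bytes"
MEM_LIMIT = "/sys/fs/cgroup/memory/lxc/{name}/memory.limit_in_bytes"

def container_stats(name: str) -> dict:
    """Return cumulative CPU seconds plus memory usage/limit (MiB) for one container."""
    def read(template: str) -> int:
        with open(template.format(name=name)) as f:
            return int(f.read())
    return {
        "cpu_seconds": read(CPU_USAGE) / 1e9,
        "mem_used_mib": read(MEM_USAGE) / 2**20,
        "mem_limit_mib": read(MEM_LIMIT) / 2**20,
    }

if __name__ == "__main__":
    print(container_stats("ecom-blue"))   # hypothetical container name
```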

          Obviously, things like vMotion are a requirement for larger-scale usage as well. While most of our production applications are fully redundant, I also have about 300 VMs for pre-production environments, most of which are single points of failure (because not many people need redundancy in dev and QA - our immediate pre-production environment is fully redundant, well, at least to the extent that production is), and it would be difficult to co-ordinate downtime to do simple things like host maintenance across the 30-50+ systems on a given host.

          1. thames

            Re: Nate Amsden containers are the future

            @Nate Amsden - If you are using Ubuntu 12.04, that is well behind the current state of the art in Linux containers. 14.04 might address some of your technical issues. The reason so much is happening with containers now, rather than a couple of years ago, is the major new kernel features which have been added to support containers better.
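            For a flavour of what those kernel features look like at the lowest level, here is a minimal sketch (my own illustration, nothing Ubuntu- or LXC-specific) of one building block, namespaces: a child process gets its own UTS namespace, so its hostname changes are invisible to the host. It needs root and a kernel with CLONE_NEWUTS support:

```python
# Minimal namespace demo: give a child process a private UTS namespace so it can
# set its own hostname without affecting the host. Requires root (CAP_SYS_ADMIN).

import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # flag value from <sched.h>
NAME = b"container-demo"   # hypothetical hostname for the child

libc = ctypes.CDLL("libc.so.6", use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: move into a new UTS namespace, then set a private hostname.
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")
    libc.sethostname(NAME, len(NAME))
    print("child hostname:", socket.gethostname())   # container-demo
    os._exit(0)

os.waitpid(pid, 0)
print("host hostname: ", socket.gethostname())       # unchanged
```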

            I'll also add that, although it may not be popular to say this, systemd has had features added to it to provide better support for containers. I think you would have to go to the very latest (non-LTS) version of Ubuntu to get that, though. (Not that I'm suggesting you ought to do that.)

            It's early days for containers. I wouldn't necessarily rush out this minute and replace existing applications with containerized ones unless I had a specific need (as you did). I would however definitely be spending time learning about them and tinkering with them because they will be a big deal in the next couple of years.

            "hopefully management types don't get confused and think they need containers to deploy micro services, when micro services deploy just fine in VMs."

            Well, the big cloud providers are deploying containers inside VMs. The two are sometimes complementary rather than competing.

            Different people are doing different things with containers, but the popular concept now is that we should be building complex applications by plugging pre-made containers together like building blocks rather than hand crafting everything from scratch. It's not the solution to everything, but it does simplify deployment for a lot of common cases.

            By the way, congratulations on finding a creative solution to your problem. It's good to read about your experience in detail. A lot of people like to sit back and complain, while only a few go out and find solutions.

        2. Matt Bryant Silver badge
          Facepalm

          Re: thames Re: Nate Amsden containers are the future

          "..... It's part of a change in design towards using "microservices"....." <Yawn> Ever heard of mainframes? And containers in general is just a re-rehash of resource management software that's been in proprietary UNIX for over a decade.

          ".....Redundancy and reliability are supposed to be designed into the application system from the ground up, rather than something which is tacked onto the outside afterwards by an administrator...." Which sounds like exactly the schpiel the VMware reps used to spout, and how many flaky VMware implementations are there in the Real World?

          ".....As for whatever limitations you may think Solaris containers may have, that's not really relevant to this discussion because nobody is talking about Solaris in this discussion...." Slowaris Constrainers was mentioned in the article - duh!

          Apologies if you have fallen for the hype (or maybe you're just peddling the hype) but there is really nothing new in containers at all.

      2. Anonymous Coward
        Anonymous Coward

        Re: Nate Amsden containers are the future

        real HA and containers means dumping all the stacking advantages of containers; "....Provision one Oracle Solaris Container per node or server...

        The "C" in RAC is "cluster". You can create an HA RAC installation by clustering multiple physical servers, or you can create a virtual RAC installation by clustering multiple containers, one per server.

        You can also, of course, create multiple RAC installations by creating multiple containers on each server. The "one Oracle Solaris Container per node" comment you misunderstood above is per RAC instance, not an absolute limit.

        Maybe if you actually learned anough about Solaris to be competent, instead of just trying to be clever with snide mispellings, you might actually have a clue?

        1. Matt Bryant Silver badge
          FAIL

          Re: AC Re: Nate Amsden containers are the future

          "....The "C" in RAC is "cluster"...." No shit Sherlock!"

          ".....You can also, of course, create multiple RAC installations by creating multiple containers on each server...." Not if you want to get resilience. You may want to go do a LOT more reading on HA.

          ".....Maybe if you actually learned anough (sic) about Solaris to be competent...." LOL at the aggrieved Sunshiner! Bit late for you to be whining, the horse has not just long since bolted but died a death.

  2. thames

    "If there was any company whose technology could be used to attempt to invalidate my distinction between containers and hypervisors it would be Odin's. They offer the ability to run more legacy style applications in containers – and with hypervisor-like features – than anyone else."

    Ubuntu's LXD is also intended to bridge the gap between heavyweight hypervisors and ultra-lightweight containers like Docker. You are supposed to be able to stuff existing applications into Ubuntu LXD without having to re-architect them or adopt the newer "micro-service" style architecture.

    I do agree, though, that Gartner has got containers all wrong by lumping them in with hypervisors. However, I think they are trying to pound square pegs into round holes because their "magic quadrant"-based sales pitch requires them to include the names of large companies that dominate legacy markets. Any magic quadrant that doesn't include companies their customers are already signed up with isn't going to meet customer acceptance, since many of those customers are really just looking for justification for past decisions. With containers, the market is dominated by small companies, and nobody knows which of them really has the right take on the technology. The large legacy vendors such as Microsoft and VMware are left trailing behind and trying to play catch-up.

    1. Nate Amsden

      Feels strange to admit this, but I think I agree with Gartner (again, this is pretty rare for me anyway). At the end of the day, container or VM, it really doesn't matter; it is a minor technical difference (speaking of the "business"-level people Gartner talks to).

      Techie folks like us care more about the details, but from a higher-level perspective the concept is very similar: multiple "instances" on shared hardware. Even if containers aren't as isolated as VMs, the end result is pretty similar.

      In the end, I think that's all that really matters.

      1. thames

        @Nate Amsden - "At the end of the day, container or VM it really doesn't matter, it is a minor technical difference"

        It does matter if you are using both but for different purposes. Some people are even using containers inside VMs.

        Furthermore, what Ubuntu, for example, is doing with LXD (that's not a typo for LXC, by the way) is different from what Docker is doing. In the end, yes, they both deploy applications, but their ideas of how to go about it are different.

        LXD is more like a direct competitor for VMs. They even call it a "container hypervisor". It is intended to run complex workloads inside a single container. You would use it more like a VM, but with less overhead.

        With Docker, on the other hand, each container is supposed to do only one very simple thing in a single process, and you run multiple Docker containers to support a single application. If something was written to be a Docker application, then it has to run in a container, because that is how Docker works.

        Lumping LXD and Docker together, let alone lumping either in with a traditional hypervisor like KVM, doesn't really make sense. They're not interchangeable.

      2. Probie

        I am with Nate on this, but it does depend on your view.

        If the goal is to run multiple application services (be they microservices or monolithic services or anywhere in between) on a piece of tin (aka a server), then I really do not see the difference. The hypervisor is a thicker wedge with more overhead on a server to compartmentalize workload (hopefully in a secure/isolated sandbox); the container is a thinner wedge with a smaller amount of overhead to compartmentalize workload (hopefully in a secure/isolated sandbox). The principles do not change, just the amount of overhead.

        I fail to see the fuss here. Gartner's magic quadrants are about markets, not methods of technology deployment, and without doubt containers and virtualization will compete in some of the same markets.

  3. Lars Silver badge
    Thumb Down

    Magic Quadrant starts to lose its sparkle

    As far as I am concerned, that sparkle was lost many, many years ago. It's surprising people still pay for it, as we all know it's more about money than what it claims to represent.

  4. Anonymous Coward
    Anonymous Coward

    Expectations

    If you think GG's role is to objectively present applied-science issues, you really don't understand GG.

    GG's role is two-fold: give end-user subscribers 'air-cover' for whatever decisions they want to make, and pimp vendor subscribers to justify decisions referencing them. That's how they make their money. The percentage of GG subscribers paying for 'objective applied-science' is not large enough to sustain GG's business (or even a lemonade stand).
