
Now listen, Gartner – virtualisation and containers ARE different

Nate Amsden Silver badge

Re: Nate Amsden containers are the future

I know you get it, but hopefully management types don't get confused and think they need containers to deploy microservices, when microservices deploy just fine in VMs.

My first round of microservices, eight years ago, ran on physical hardware (in production): each microservice ran its own instance of Apache on a different port (Ruby on Rails at the time, ugh, hello hype bandwagon again) and talked to the other services through the load balancers.

I'll elaborate a bit on my usage of containers since it may be non-standard and perhaps the information could help someone. A long time ago, in a galaxy not too far away, our production deployment model called for two "farms" of servers: we would deploy to one farm and then switch over to it. In the early days we were in a public cloud, so the concept was that we "build" a farm, deploy to it, switch to it and destroy the other farm. Reality didn't happen that way and both farms just stayed up all the time. When we need more capacity we activate both farms.
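Roughly, the model looks like the sketch below. The farm membership, deploy() and set_active_pool() are hypothetical stand-ins for illustration only - in practice the flip is done at the load balancer, not by a script like this.

# Rough sketch of the two-farm (blue/green style) model described above.
# deploy() and set_active_pool() are hypothetical callables supplied by whatever
# tooling drives the release and the load balancer.

FARMS = {"farm-a": ["web1a", "web2a"], "farm-b": ["web1b", "web2b"]}

def release(active, standby, deploy, set_active_pool):
    """Deploy the new build to the standby farm, then point traffic at it."""
    for host in FARMS[standby]:
        deploy(host)              # push the new release to each standby host
    set_active_pool(standby)      # load balancer now sends traffic to the standby farm
    return standby, active        # roles swap; the old farm stays up as a fallback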

After six months or so we migrated to our own hosted infrastructure, where the cost of running two farms wasn't a big deal: because the inactive farm isn't consuming resources, it really doesn't cost much of anything to maintain (in the grand scheme of things).

Our main e-commerce platform is a commercial product we license, and it is licensed on the number of servers you have (regardless of VM, physical or container, or number of CPUs or sockets etc). One server = one license. The application is CPU hungry and the licenses aren't cheap ($15k/server/year). For a while we ran it in production on VMware, which worked fine, though it wanted ever-increasing amounts of CPU.

In order to scale more cost-effectively I decided early last year to switch to physical servers, but I wanted to keep the same ability to switch back and forth between two "farms". Originally I thought of just having one OS and two sets of directories, but the configuration was much more complicated and would be different from every other environment we have. Another option was to use a hypervisor (with only two VMs on the host), which seemed kind of wasteful. Then the idea of containers hit me... it turned out to be a great solution.

The containers themselves have complete access to the underlying hardware - all the CPU and memory (though I do have LXC memory limits in place; CPU was more of a concern). Only one container is active at any given time and has full access to the underlying CPU. If a host goes down that is OK: there are two other hosts, and only one of the three is required for operation. This saved a lot by not licensing vSphere (little point with basically one container or VM active at any given time), saved the complexity of adopting another hypervisor nobody in the company has experience with, and it's pretty simple.

I calculated that the new hosts had 400% more CPU horsepower than our existing VM configuration (with both "farms" active). Today these physical servers typically run at under 5% CPU (the highest I have seen is 25%, outside of application malfunctions, where I saw 100% on a couple of occasions). I don't mind "wasting" CPU resources on them because the architecture paid for itself pretty quickly in saved licensing costs, and it leaves enormous capacity to burst into if required.
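For what it's worth, at that point the LXC memory cap is just a cgroup limit. A rough sketch of setting and reading it back from the host follows - it assumes the cgroup v1 layout that Ubuntu 12.04-era LXC used (/sys/fs/cgroup/memory/lxc/<name>/), and the container name and 96GB figure are made up for the example.

#!/usr/bin/env python
# Sketch only: set and read back an LXC container's memory cap via cgroup v1.
# Assumes the /sys/fs/cgroup/memory/lxc/<name>/ layout of 12.04-era LXC.
import os

CGROUP_MEM = "/sys/fs/cgroup/memory/lxc"

def set_memory_limit(container, limit_bytes):
    # Roughly equivalent to: lxc-cgroup -n <container> memory.limit_in_bytes <limit>
    with open(os.path.join(CGROUP_MEM, container, "memory.limit_in_bytes"), "w") as f:
        f.write(str(limit_bytes))

def get_memory_limit(container):
    with open(os.path.join(CGROUP_MEM, container, "memory.limit_in_bytes")) as f:
        return int(f.read())

if __name__ == "__main__":
    set_memory_limit("ecom-farm-a", 96 * 1024**3)   # example container name and size
    print(get_memory_limit("ecom-farm-a"))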

I don't care about mobility for this specific app because it is just a web server. I wouldn't put a database server, or memcache server etc on these container hosts.

Headaches I find with LXC on Ubuntu 12.04 (not sure if other implementations are better) include:

- Not being able to see accurate per-container CPU usage (all I can see is host CPU usage) - though the raw cgroup counters are readable; see the sketch after this list

- Not getting accurate memory info inside the container (the container shows the host's memory regardless of the container's limits)

- The process list on the host is really cluttered (e.g. multiple postfix processes, lots of apache processes; the default tools don't say what belongs to a container and what is local to the "main" host OS)

- autofs for NFS does not work in a container (kernel issue) - this one is really annoying

- Unable to have multiple routing tables on the container host without perhaps incredibly complex kernel routing rules (e.g. container1 lives in VLAN 1, container2 lives in VLAN 2: different IP space, different gateway - when I looked into this last year it did not seem feasible)
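For the first two items the raw numbers do exist in the cgroup filesystem even though the standard tools ignore them. This is a workaround sketch rather than a fix, with the same cgroup v1 path assumptions as above.

#!/usr/bin/env python
# Workaround sketch: per-container CPU time and memory use read straight from
# cgroup v1 on the host. Assumes containers live under /sys/fs/cgroup/<subsys>/lxc/<name>.
import os

def read_int(path):
    with open(path) as f:
        return int(f.read())

def container_stats(name):
    # cpuacct.usage is cumulative CPU time in nanoseconds; sample it twice and
    # take the difference to get a usage rate rather than a lifetime total.
    cpu_ns = read_int("/sys/fs/cgroup/cpuacct/lxc/%s/cpuacct.usage" % name)
    mem_used = read_int("/sys/fs/cgroup/memory/lxc/%s/memory.usage_in_bytes" % name)
    mem_limit = read_int("/sys/fs/cgroup/memory/lxc/%s/memory.limit_in_bytes" % name)
    return {"cpu_seconds": cpu_ns / 1e9,
            "mem_used_mb": mem_used / 2.0**20,
            "mem_limit_mb": mem_limit / 2.0**20}

if __name__ == "__main__":
    base = "/sys/fs/cgroup/memory/lxc"
    for name in os.listdir(base):
        if os.path.isdir(os.path.join(base, name)):
            print(name, container_stats(name))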

I believe all of the above are kernel-level issues, but I could be wrong.

All of those are deal breakers for me for any larger-scale container deployment. I can target very specific applications, but in general the container hosts are too limited in functionality to be a suitable replacement for the VMware systems.

Obviously things like vMotion are a requirement for larger-scale usage as well. While most of our production applications are fully redundant, I also have about 300 VMs for pre-production environments, most of which are single points of failure (because not many people need redundancy in DEV and QA - our immediate pre-production environment is fully redundant, well, at least to the extent that production is), and it would be difficult to co-ordinate downtime for simple things like host maintenance across 30-50+ systems on a given host.
