What would be terrible for containers is if someone discovered a bug where a process run in one container would allow access to kernel memory belonging to another.
Hang around Twitter long enough and you'll need a double helping of antidepressants to cope with the obvious truth that you are WAY BEHIND ON THE CONTAINER REVOLUTION. Microservices? Everyone else is doing them. Kubernetes? Most CTOs are naming their kids after the popular project. And you? Well, you're still fiddling with VMs, …
Monday 8th January 2018 18:59 GMT Jonathan Schwatrz
"....if someone discovered a bug where a process run in one container would allow access to kernel memory belonging to another." We used to call that the IGAS test - as in the CIO says "I Give A Sh*t" test. In the old days of monolithic application servers, where you hosted one application as one part of a solution stack per server, when UNIX started offering co-hosting (where scheduling controls allowed you to host two apps on the same server without conflicts) we used to apply the IGAS test to see if the company was willing to risk hosting a particular application on a server alongside another application.

For example, the CIO at BT might say "I don't give a sh*t about our website, just keep the billing system up", whereas the CTO at Amazon might say "I give a sh*t about the website, it's my priority". The business model defines the application requirement, which defines what the business gives a sh*t about.

The risk in moving away from the one-app-per-server model was that app A would interfere with or hog resources from app B, or that patching app A would mean downtime for app B, or a security hole in app A would give malcontents access to app B too - and if the business actually gave a sh*t about app B then you would not co-host them (possibly not even on an IBM mainframe). Next came virtualization and the IGAS test was still appropriate, then containers (in UNIX this was looooooooooooooong before Kubernetes was even dreamt of).

The IGAS test still applies today - if there is something that needs tweaking or patching in the platform layer (such as Mesos) then you are going to have risk, even if you're shifting traffic to another node during patching (especially if you have one OS image spread over all the nodes). If your business depends on an app being up and available 24x365 - if they really give a sh*t about it - they may prefer to pay for that app to sit on its own hardware.
Which is why businesses may never go completely containerized, just as they never went completely virtualised.
Tuesday 9th January 2018 01:43 GMT AdamWill
well, yes, but...
...that's also terrible for VMs. you know, the things in which most of the world's really critical applications are currently running.
but don't worry, this is fine. everything's going to be fine. now excuse me while I go take all my money out of the bank and start fortifying my secret bunker in the woods...
Monday 8th January 2018 10:08 GMT Craigie
Monday 8th January 2018 13:20 GMT Sgt_Oddball
I never understand...
Why do some reporters get it so wrong? Why do they insist that the 'enterprise' market is ready and willing to move headlong into a large scale shift of practices/digital estate/hardware just because something's new and shiny?
It's like piloting an oil tanker with a springer spaniel on speed. Especially when you're not talking about a few dozen, or a few hundred, or even a few thousand customers, but potentially millions. They want, nay DEMAND, that things be stable, and stuff's unreliable enough as it is without changing over to something completely new every 18 months.
... And breathe.
Monday 8th January 2018 13:38 GMT Anonymous Coward
At $JOB, the developers are ready to move everything to containers, the sysadmins are ready, the PHB is on board, and the company is "ready". So do we get time to refactor all the applications to suit running in a container?
bzzzt. We can make it better as long as it doesn't take any time, effort or resource.
Monday 8th January 2018 15:33 GMT Anonymous Coward
As long as you're using containers properly and not just because it's some trendy buzzword, then you should be OK.
That's containers as small, lightweight, application-based thingies - not a 2GB base-image reproduction of an actual server, of which there are quite a few not too far from me...
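The contrast the poster draws shows up directly in image size. A minimal sketch - the image tags and the `app.jar` path are illustrative, and the builds need a local Docker daemon:

```shell
# The "reproduction of an actual server": a full OS base with a toolchain installed on top.
cat > Dockerfile.fat <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openjdk-8-jdk maven curl vim
COPY target/app.jar /opt/app.jar
CMD ["java", "-jar", "/opt/app.jar"]
EOF

# The lightweight, application-based thingy: just the runtime the one process needs.
cat > Dockerfile.slim <<'EOF'
FROM openjdk:8-jre-alpine
COPY target/app.jar /opt/app.jar
CMD ["java", "-jar", "/opt/app.jar"]
EOF

docker build -f Dockerfile.fat  -t app:fat  .
docker build -f Dockerfile.slim -t app:slim .
docker images app   # the fat image is many times the size of the slim one
```

Build tools like maven belong on a CI box, not inside the image your scheduler ships around the cluster.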
Monday 8th January 2018 18:15 GMT teknopaul
don't listen to RedHat
They make money from selling full OSes. They deliberately break lxc because it interferes nastily with their licensing model.
I'm sure 90% of RedHat shops have no container strategy, and that's just the way RedHat likes it.
Also that systemd thing. RedHat's little baby is basically broken by containers.
I'm sure the JBoss department will also tell you that a huge app server running on a massive JVM is just what you ought to be running on your full OS too.
Monday 8th January 2018 20:33 GMT Springsmith
Enterprise does the best it can to deliver for its customers
Even as an enthusiast for new technologies I would struggle to bring myself to even try to persuade my "Enterprise" employers that containers with bare metal Node.js and MongoDB would be a suitable replacement for JBoss, Java, and Oracle.
However for small integrated devops style teams I think it would be a different story.
I'm a reasonably experienced Linux/Unix admin and an experienced developer - and I'd have to say my experience of k8s and fabric8 is that none of it is as easy as it looks for a one-man-band to get set up. That is why we have teams! (It is a lot more doable than a RedHat Oracle setup, though!)
So for my money there is probably a sweet spot for Container adoption - certainly at the moment. That sweet-spot might apply to modest teams within SMEs or startups.
An analogy of an "enterprise" as an aircraft carrier, SMEs as destroyers, startups as MTBs, and one-man-bands as kamikaze pilots seems appropriate.
Enterprises do as the best and the brightest within them deem appropriate, and they are usually correct.
Tuesday 9th January 2018 13:32 GMT Anonymous Coward
The community has responded - Bollocks
"Each time Kubernetes has been called out for its shortcomings, the community has responded by improving that particular area."
Like when people want to be able to set a MAC address so that they can containerize an application that is node-locked to a MAC address. People have been asking for that since 2016, and still have to go to Docker, which has that ability.
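For reference, Docker exposes this directly as a flag on `docker run` (the image name here is a placeholder, and the commands need a running Docker daemon):

```shell
# Pin the container's MAC address so a MAC-node-locked licence check still passes.
docker run -d --name licensed-app \
    --mac-address 02:42:ac:11:00:02 \
    example/licensed-app

# Confirm the address the container actually sees on eth0.
docker exec licensed-app cat /sys/class/net/eth0/address
```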
Wednesday 10th January 2018 18:44 GMT Peter Quirk
Containers are easier to adopt on Windows 10 desktops
IT may find that it's easier to deploy some popular server apps via docker. For example, I prefer to install containers on my desktop rather than apps. My laptop has docker images for logstash, minio, CDAP, mongoDB, and golang. They're much easier to manage, update and remove.
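That workflow is just `docker run` and `docker pull` against the public registries. A sketch using some of the services listed above - the image names and the logstash tag are as published around that time and may since have moved on:

```shell
# Run the services as containers instead of installing them natively.
docker run -d --name mongo -p 27017:27017 mongo
docker run -d --name minio -p 9000:9000 minio/minio server /data
docker run -d --name logstash docker.elastic.co/logstash/logstash:6.1.1

# Updating is pull-and-recreate rather than an in-place upgrade.
docker pull mongo
docker rm -f mongo && docker run -d --name mongo -p 27017:27017 mongo

# Removal leaves nothing behind on the host.
docker rm -f minio && docker rmi minio/minio
```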
Tuesday 28th January 2020 18:11 GMT poolpog