
Here to help
That'd be teeming, not teaming.
Containers are becoming the de facto way of spinning up new services and applications. Many are running on cloud servers which themselves are virtual machines running on bare metal, well... somewhere in the world. For many developers, containers are a way to create hermetically sealed application services. But once started, …
So you're suggesting that host servers are monitored? Good idea - I don't think sysadmins ever considered monitoring servers before. That might be a new market!
Docker containers can leave a load of cruft behind: stopped containers that haven't been removed, volumes that weren't removed along with their containers, and of course container images that aren't being used by any running container any more.
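For anyone wanting to sweep up exactly that cruft, the docker CLI does have purpose-built prune subcommands. A minimal housekeeping sketch (the `-f` flag skips confirmation, so these are destructive; run them without `-f` first and eyeball what they would remove):

```shell
# Only prune if the docker CLI is installed and the daemon is reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker container prune -f    # remove stopped containers
  docker volume prune -f       # remove volumes no container references
  docker image prune -a -f     # remove images unused by any container
else
  echo "docker not available; nothing to prune"
fi
```

`docker system prune --volumes` covers much the same ground in one command, if you prefer.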
I think it's worth suggesting that, in the same way that containers should be ephemeral, the cloudy hosts running them ought to be too, to avoid the accumulation of this slough.
If you're on bare metal then your ops people should be putting the normal housekeeping monitors and scripts on the server that they'd run on a 'normal' server.
If your 'devops' doesn't grasp the concept of general server husbandry then I'd rather not buy shares in your slack startup.
Bringing back the total lack of procedures and oversight that the industry has spent decades trying to put in place.
It is marvelous to think that the most disruptive technology to ever hit a production server is the lazy developer who couldn't be arsed to follow procedures because reasons.
Because you cannot make me believe that all those containers have been created by small mom & pop shops who don't know better, hmm? I don't think so. They've probably been created by companies who have an established protocol for managing servers, but who call in some DevOps evangelist who blinds them with "expertise", dizzies them with buzzwords and proceeds to screw them over with shoddy installs, no documentation and a fat bill.
Exactly.
If you think that cloud or containers or whatever virtual servers are called this week don't need experienced system administrators to manage them then you deserve all the problems you get.
The article basically boils down to: if you don't document your systems and you run them in a casual manner, you will end up in chaos. Well, who could have predicted that!
But the hype around cloud/containers couldn't ever admit to the fact that running virtual servers is pretty much the same as, you know, running actual servers which has been going on for decades.
And therefore there are a lot of very skilled, experienced people who you should be looking to in order to find out how you really should be doing this stuff. Some of them have even written very good books on the subject!
You won the interwebs beer today mate.
People who "love" to deploy containers are those who do not know how to administer a server and who do not take the time to learn how to integrate stuff.
It takes me the same time to deploy a container from scratch (without using a 3rd-party image hosted by someone else as a base) as to build a complete server, with one big difference: the server is easier to troubleshoot and administer.
All this DevOps business leads to what I call "impossible jobs", where you're required to know so many fashion-of-the-week techniques as to make it impossible to get through an interview.
The only development shops who are using this shit in an effective manner are those who had a decently designed application and good procedures from the beginning, or who have a server fleet with a decently thought-out orchestration system.
Anyone trying to build a large code-base of anything mission-critical with no proper planning, and churning out code to customers every 5 mins... well, good luck.
Here is an idea: garbage collection of containers. Randomly shut down (but do not remove the data, for at least one year) containers which have not been seen by an appropriate application-monitoring tool for an extended period of time, then wait until someone starts complaining. Of course, some application monitoring needs to be in place first; if your organisation is only monitoring the infrastructure but not the applications, that is not enough to institute such a policy (and it's also part of the reason why you might not know what-the-hell-that-thing-is-doing).
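The selection step of that policy is simple enough to sketch. A hypothetical example, assuming you can export "container ID, epoch time last seen by monitoring" pairs from your monitoring tool (the IDs, timestamps and cutoff below are made up):

```shell
# Print the IDs of containers not seen by monitoring since the cutoff.
# Each candidate would then be stopped, not removed, so its data survives.
stale_containers() {
  cutoff=$1    # epoch time: anything last seen before this is stale
  while read -r id last_seen; do
    if [ "$last_seen" -lt "$cutoff" ]; then
      echo "$id"    # candidate for: docker stop "$id"
    fi
  done
}

printf '%s\n' "web1 1000" "batch2 500" "api3 2000" | stale_containers 900
# prints: batch2
```

Feeding the output to `docker stop` rather than `docker rm` is what implements the "keep the data for a year" part of the proposal.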
It is possible. BUT it requires people to do CompSci/Business double majors etc. My youngest is in bioinformatics, a similar thing, where she speaks Bio and Comp natively because she did a double major in Biochem and CompSci (with some creative timetabling at a NZ university which is very flexible about such things). She is currently doing a PhD in bioinfo while working part-time as a bioinformatician for the local Ag Research institute, who have recognised her fairly unusual native skills base.
You see unlike the Biochem grad who does a postgrad in CompSci, she learned to speak and think in both spheres at the same time and demonstrated the effectiveness of that by graduating well.
The world is full of people who have converted themselves but we need some native bilingual people as well. Yet the universities are VERY slow in designing courses. English universities are very inflexible so there will have to be defined courses. She did ask Stirling Uni here in Scotland but they said the timetabling was impossible. So, she went back to our Alma Mater in the city of her birth, treated as a local for loan etc purposes.
In our time we knew a very clever woman who did a double major in Biochem and Law, just as Biotech was taking off. Last heard of over here in Blighty earning BIG bucks as a specialist corporate lawyer who could talk to the scientists.
Three of my fellow students (one physicist, two chemists) retrained as accountants.
As for Devops, this looks to me like just another buzzword for people who are actually not very good at actual programming. I spent my career wondering how to turn graduates into capable programmers, and never did find an answer.
I'm not sure what everyone thinks devOps is but all the commentards on here seem to constantly slate the concept.
There are many implementations of devOps, but the one we have is this: we are a product team who develop software related to our product (retail and PoS in my case). That's the dev part. We also deploy our software and manage the day-to-day operations. That's the ops part.
Containers have helped us automate testing and deployment, and because the operations part is within our remit we can also automate much of that, though not all. If the operations part of the job were simply thrown over a wall, usually to an underpaid and under-resourced team, we might never see the requirements for automating any of the deployment stuff or any of the operational tasks. If it's all within the same team, then operational requirements can take as high a priority as features and functional issues.
I do agree with some of the concerns raised in this article, however: active monitoring and management of different environments is important and should not be overlooked, as it's easy to neglect these tasks.
What it appears to be is inexperienced developers egged-on by equally naïve startup managers acting as if they know better than everyone else and promising they can deliver sooner. Any developer can deliver 'sooner'. The trick is to pretend that concepts such as planning, documentation, robustness, defensive design, (what might be collectively termed 'software engineering') are simply unnecessary.
There is nothing new in the idea that developers end up supporting their applications in the field. It happens all the time, and many developers will have cut their teeth in small firms where there was no other choice. The only thing that is 'new' are the people who seem to think that it's a new concept just because there's a new buzzword to slap on it.
A smart man by the name of Jorge Agustín Nicolás Ruiz de Santayana y Borrás famously wrote "Those who cannot remember the past are condemned to end up with their head on a stick and a red-hot poker up the botty*".
*Sorry, that should read "repeat it".
That's interesting, but your entire argument seems to hinge on the idea that we gain all the knowledge we need from universities.
That is certainly not the case for me or any of my colleagues, as we learn new things on a daily basis; university at best was proof that you can learn, not that you have learnt everything you will need to learn.
I do think a university education is a good start, but lifelong learning is really the aim here.
"...It could be a single line in the Docker script that injects just enough malicious code (or small application) to cause your company a nightmare down the line."
So much depending on so few lines.
Coding has to be less of a 'dark art', code less of a 'wizard codex': more readable, accessible, auditable... and open to insiders' oversight.
I agree with the push back on the comments, yes, housekeeping (aka sysadmins) still exist. The problem is that in the world where containers are being adopted, a lot of the good practice that sysadmins have brought over the years is being discarded.
Now it's Infrastructure As Code. Meaning the computer is just another app that the developer manipulates.
What the author talks about in the article certainly rings true for me. Gone are the days where your "pet" Solaris machine was taken care of carefully. Now we salvo containers into the Wild West under the banner of SRE and DevOps.
I hope that the tools around this continue to mature. Not everyone has the operational maturity of Google in this space.
Google might have 'operational maturity' in what they're running, but what services they offer are a whole different story. What they do have is enough cash that they needn't lose a moment's sleep over a decision to arbitrarily cancel a service you built your whole enterprise around.
Well, that's the theory and the marketing approach to reality. (We suspect by now that the DevOps column at El Reg is heavily sponsored, anyway.)
A brief look at the CVE database reveals a number of glaring security issues, including root escalation and totally screwed up permissions. Certainly more of them than should be the case given its short lifetime.
So, for as long as I can convince my clients, systems will be hermetically sealed to keep Docker out, thank you. If it really matures past the hype, I may take another close look at it. Until then, if I need this extra layer for quick deployment and flexibility (plus the added security that Docker doesn't currently provide), I'll continue using *mature* FreeBSD jails, cheers.
I've been doing the Ops part of DevOps for 16 years now. I've yet to meet a Dev who could fully describe everything that happens to get content into a browser (I'm talking down to the level of knowing local host name resolution order). So my experience of DevOps has been all about no-boundaries working rather than Devs suddenly doing Ops.