Netflix could pwn 2020s IT security – they need only reach out and take

The container is doomed, killed by serverless. Containers are killing Virtual Machines (VM). Nobody uses bare metal servers. Oh, and tape is dead. These, and other clichés, are available for a limited time, printed on a coffee mug of your choice alongside a complimentary moon-on-a-stick for $24.99. Snark aside, what does the …

  1. Denarius Silver badge
    Happy

    back to the future

    So BeOS had it right then in its application packaging. None of this shared library nonsense, everything in one package in its own directory. BTW, what happened to the death of COBOL on mugs?

    1. CrazyOldCatMan Silver badge

      Re: back to the future

      None of this shared library nonsense, everything in one package in its own directory

      Which was (sort of) already done on the Acorn Archimedes - I think that was the first time I used something where what looked like an application was, in fact, a special sort of directory that contained everything that the application needed to run.

      As is (again, sort of) done on MacOS today.

    2. 707kevin

      Re: back to the future

      And it was FAST! And almost worked, mostly, on some of my hardware :)

  2. Anonymous Coward
    Anonymous Coward

    I remember when....

    ....a container was called a process. Then Marketing came to sell stuff.

    1. Ben Tasker Silver badge

      Re: I remember when....

      A container might run as a process on the host, but a process is not necessarily a container.

      Although their popularity has grown recently, containers really aren't that new. Docker was first released in 2013, but the underlying kernel primitives hit Linux in 2006.
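
      For the curious, the kernel primitives in question (namespaces, later joined by cgroups) are visible on any Linux box with no Docker installed. A quick sketch, with unshare standing in for what a container runtime does under the hood:

```shell
# Every Linux process already lives inside a set of namespaces --
# the kernel primitives that container runtimes build on.
ls -l /proc/self/ns

# unshare (util-linux) hands a plain command its own PID namespace,
# no Docker involved. Guarded with || true because unprivileged user
# namespaces are disabled on some distros.
unshare --user --pid --fork --mount-proc ps aux 2>/dev/null || true
```

      If the unshare line works on your machine, ps reports itself as PID 1 - which is all a container's "isolation" is at heart: a process with a restricted view.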

      It's really not just a marketing thing.

      That said, although Docker (and other container runtimes) have their place, they get abused IMO. Creating a Docker image can massively simplify deployment (which is good for engineering) but can create an absolute maintenance nightmare for operations.

      It also requires a bit of additional care in terms of managing your build pipeline. Yes, you can send me an image to deploy and I can spin it up. But, where did you build it? If you're hit by a bus, can I build a new image (to integrate patch x), or is it in fact going to turn out you built it on your laptop without documenting what needs to be available for the build?

      That can be an issue without Docker too, but IME it crops up less, because you tend to package the software either with the dependencies it needs, or with a well-defined list inside an RPM or Deb.
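
      One way to blunt the bus-factor problem is to make the image build itself live in version control. A hypothetical sketch (image name, packages and paths are illustrative, not from this thread):

```dockerfile
# Hypothetical Dockerfile: the whole build recipe is checked in,
# not trapped in someone's laptop shell history.

# Pin the base image (pinning by digest is stricter still)
FROM debian:buster-slim

# Declare build dependencies explicitly, rather than relying on
# whatever happened to be installed on the build machine
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*

COPY app/ /opt/app/
CMD ["python3", "/opt/app/main.py"]
```

      With that (plus any build scripts) in the repo, anyone can rebuild the image to fold in patch x, bus or no bus.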

      Personally, I start to twitch whenever someone says "we could use Docker" unless that's followed by a justification of why a container is actually needed. We can trivially spin up cattle with Ansible/Puppet without the need for Docker, so for Docker to make it into the implementation you need to be able to justify it. There are sometimes valid reasons; "it makes it easier for me to include this obscure dependency" isn't one of those IMO.

      And that's before I start on the issues I have with Docker itself as a project.

      1. Phil O'Sophical Silver badge

        Re: I remember when....

        containers really aren't that new. Docker was first released in 2013, but the underlying kernel primitives hit Linux in 2006.

        And Solaris has had fully-fledged containers (zones) since way back in 2005.

      2. Tom 38 Silver badge

        Re: I remember when....

        We can trivially spin up cattle with Ansible/Puppet without the need for Docker, so for Docker to make it into the implementation you need to be able to justify it.

        It's far better than our older architecture of kvm VMs and CFEngine. Everything that went in CFEngine, even if it was about the structure of the application and code, had to be approved by a sysadmin before the software teams could apply it.

        It's easier to manage and distribute workloads with a well-structured k8s/docker/terraform/vagrant/CI setup. Developers get more control over how the software is structured - and if it breaks, it's on them, not the sysadmin, and it can be rolled back trivially. k8s manages the haproxy routing of requests to containers automagically, so there are no manual configuration-file changes when we add an extra host, it JFDI. It makes it much simpler to do red-green deployments, or gradual rollout of new features - things that were harder or impossible with the old system.

        If you are just using Docker for the hell of it, no, it's not a good solution. There's a lot to learn and implement, and if you cba to put the effort in, you're not going to get good results. It's not enough to say "use Docker"; there are at least ten other parts of the infrastructure to set up and use.

      3. Anonymous Coward
        Anonymous Coward

        @Ben Tasker Re: I remember when....

        Very nice summary on Docker.

  3. Korev Silver badge

    Netflix as a vendor

    The idea of Netflix as an IT vendor is weird, but you could say the same about a certain bookseller....

  4. Anonymous Coward
    Anonymous Coward

    Can we stop claiming nobody uses bare metal servers?

    My senior management have been really confused when I've mentioned our data centre costs, because they assumed we had no servers and everything was virtualised. They'd heard somewhere that we were no longer using "bare metal" and took it literally, whilst obviously the VMs have to run on something...

    1. Anonymous Coward
      Angel

      Re: Can we stop claiming nobody uses bare metal servers?

      No, surely you are now serverless. The whole serverless room is just full of empty racks, heck even the switches are pure software now, no need for pesky cables anymore.

      Just row after row of empty racks delivering everything the consumer (we used to call them users) needs.

      We hope to roll out ethereal (TM) computers soon, no need for clumsy things like keyboards, mice, and screens, just run by the power of notional thought (another TM)

      1. Anonymous Coward
        Anonymous Coward

        Re: Can we stop claiming nobody uses bare metal servers?

        You just described Schrödinger's datacentre.

        Is there really anything in it?

        1. Andytug

          Re: Can we stop claiming nobody uses bare metal servers?

          You can't tell until you open the door....

        2. scrubber

          Re: Can we stop claiming nobody uses bare metal servers?

          "Schrödinger's datacentre"

          If a server falls over in the woods, does it make a sound?

      2. ma1010
        Coat

        Re: Can we stop claiming nobody uses bare metal servers?

        We hope to roll out ethereal (TM) computers soon, no need for clumsy things like keyboards, mice, and screens, just run by the power of notional thought (another TM)

        Steve Bong, you're back! I've been wondering where you'd gotten to. Sounds like an absolutely wonderful idea for a new catapult. We'll start organizing the funding right away.

        But rather than "notional thought," wouldn't "Thinkfluence" be a better term?

        -Theresa

  5. This post has been deleted by its author

    1. CrazyOldCatMan Silver badge

      Does that mean that monitoring tools are obsolete for containers, too?

      Yes.

      And no.

      Maybe.

      And, just maybe, by monitoring it, you bring it into being. Therefore, you don't need all that expensive squishy meatbag programmer stuff - just set up the monitoring and, by the power of positive thought, your solution comes into being.

      Whee! These drugs are good!

  6. Joe Harrison

    Finally I have found my niche in IT

    Blundering about and randomly knocking important things offline, yes I will be changing my job title to Chaos Monkey forthwith.

  7. Simon Ward

    Thank you. This:

    "Serverless, meanwhile, is also no big threat to containers. It is a stupid name for the technology in question, which is really something between a batch script and a TSR that you run on a cloud."

    is the best description of 'serverless' I've come across thus far, and I fully intend to use it the next time I'm interviewed by buzzword-spewing imbeciles from middle damagement.

    Containers? chroot on steroids with a network layer slapped on top.

    What's old is new again - both of these things existed, albeit in a slightly different form, when I was at university in the 90s.

    Bill Hicks was right - if you're in marketing, just do the world a favour and kill yourself.

    1. CrazyOldCatMan Silver badge

      albeit in a slightly different form, when I was at university in the 90s

      And on mainframes from (roughly) the 1960s. To misquote: "any sufficiently fast context-switching is indistinguishable from multitasking".

  8. dgc03052

    For the Win:

    "Red Hat has most of the required components, but it will probably take them at least a decade to integrate all of it into systemd."

  9. tiggity Silver badge

    MS

    MS are in with a good shout for those who don't want to go the containers route and aren't bothered about platform-agnostic solutions. "Standard" apps coded for Windows and used on premises work fine ported to Azure (with a few caveats about what is not allowed on SQL Server on Azure vs "normal" SQL Server).

  10. CrazyOldCatMan Silver badge

    "TSR that you run on a cloud"

    Presumably installed using the loadhigh command..

  11. Robert Carnegie Silver badge
    Joke

    "integrate all of it into systemd"

    I don't think you meant that sincerely, but you probably meant to provoke a reaction!

    1. Anonymous Coward
      Anonymous Coward

      Re: "integrate all of it into systemd"

      Trevor, as I've come to learn from our conversations, really does mean it. And I can say the same about Red Hat and systemd.

  12. teknopaul Silver badge

    perhaps it ain't that simple

    "automated workload baselining, instrumentation, isolation and incident response"

    Sure, that's most of what you want. But not _all_ of it. Logging, perhaps?

    The thing about a single-process container is that you have to have all that stuff outside the container. With a VM you have a ton of tools to fill those roles.

    Sometimes an awk script on a log makes a good quick alarm system.
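
    In that spirit, a minimal sketch of the awk-on-a-log alarm (the file name and the ERROR convention are assumptions, not anything from this thread):

```shell
# Minimal "awk on a log" alarm. app.log and the ERROR pattern are
# stand-ins for whatever your app actually writes.
printf 'ok\nERROR disk full\nok\nERROR timeout\n' > app.log

# Count matching lines; n+0 forces numeric output even when n is unset
errors=$(awk '/ERROR/ {n++} END {print n+0}' app.log)
if [ "$errors" -gt 0 ]; then
    echo "ALARM: $errors error lines in app.log"   # -> ALARM: 2 error lines in app.log
fi
```

    Wire that into cron or a Nagios check and you have the quick alarm system described above - no agent, no sidecar.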

    Nagios monitors generally require a scripting framework.

    Boot options often need sh.

    Apps are not just processes; GNU tools are quick and easy.

    I don't see anyone hitting the sweet spot between a light container and a single process. It will be too restrictive. Snaps try; I don't like them.

    Anyone who isolates processes for security but lets all your quick and easy hacks and fixes still work might find a big niche. E.g. the main process in a container with a full OS on the outside, instead of just ESX.

    The big players probably don't hack at their processes much, because they have so many. Anyone with one, two... four instances of their core app process probably has a lot of tooling around them: logging, bespoke monitors, a couple of batch jobs that fork a script, etc.

    Containerised processes with a full OS outside could be cool. We've had chroots for ages, and process isolation turns out to be harder than we thought - ref. Meltdown etc.

    Containers without tools are hard work now; perhaps they always will be?

  13. TechDrone
    Go

    Does that word mean what you think it means?

    I guess it's a sign of age when you see something like TSR in an article and have to follow the link to find out the author really did mean what you thought it meant.

    1. Trevor_Pott Gold badge

      Re: Does that word mean what you think it means?

      Damn it, I'm not old!

      I'm just alt-young.

      1. Alistair

        Re: Does that word mean what you think it means?

        I might have chewed through a birthday recently, and might have spent far too many years in IT of various ilks, but I believe in the 'growing up and growing old have nothing to do with one another' axiom.

        I've QB'ed Linux onto the DC floor for my employer, from 2 to over 4K installations. From 'data appliance' to 'blades' to servers, to VMs, clusters, storage farms, server farms and DB workhorses, webservers, application servers, integration and file servers.

        "Containers" change a few things. Mostly, however, I think the single largest issue with containers is that the expectation of what containers *are* and *can do for us* varies enormously: from the devs, who seem to think containers will let them shorten development cycles; to operations, who think containers will remove the need to make sure things work; to platform folks, who think containers will make it possible to keep hardware at 80%; to management, who are convinced that 90% of the systems could be collapsed onto two DL580s running containers; to beancounters, who see the Open Source tag and decide it will be free.

        Sadly, the entire lot of them are utterly wrong on all fronts. I will point out that *if* the devs sat down and decided to work together on some common ground, the development cycles would get shorter. The Ops folks should sit down and decide what amounts to an acceptable set of performance limits on the critical apps (Tx/s perhaps? *something fcs* - or my continual question: what is too slow?). The platform folks need to decide what level of risk they are willing to assume, and how much app performance they are willing to sacrifice to a hardware failure. And management and the beancounters need to sit down and decide what they are willing to pay for application performance.

        I see a difference in development style between 'dedicated application environment' and 'containers' - i.e. scale up vs scale out - and I see a substantial change in application design between the two.

        The major advantage, in my book, is that once you've made the mental, logical and process shifts required to go containers, you *should* have a single pane of glass somewhere with all your relevant metrics, displayed clearly, so that one can see where your business flow is broken. The "change" to containers is not something that will happen across an entire business overnight, but it *may* have a substantial impact on overall business functionality very quickly once started.

    2. Anonymous Coward
      Anonymous Coward

      Re: Does that word mean what you think it means?

      I thought he was on about the last bastion of innovative British military aircraft design, the TSR-2.

  14. ryebread157

    Well, all you need to do is just...

    A couple of Docker containers are pretty easy to understand and manage. Lashing them together with k8s, adding persistent storage, getting devs to learn it, etc. is HARD. Red Hat will gladly sell you $12K/year per-server subscriptions for OpenShift and fly in their consulting army to help you, and companies are buying it. But I'm glad to see others, like Docker Inc, making it easier to use and less expensive. There is a lot of investment going on in this area, and it will be fun to see what comes of it.

  15. SouthernLogic

    Hotel California

    Hotel California-class agreement!! That was awesome, and very accurate about the agreements of M$, AWS, and Google.

