VMs were a fad fit for the Great Recession. Containers’ time has finally come

Welcome to the latest Register Debate in which writers discuss technology topics, and you – the reader – choose the winning argument. The format is simple: we propose a motion, the arguments for the motion will run this Monday and Wednesday, and the arguments against on Tuesday and Thursday. During the week you can cast your …

COMMENTS

This topic is closed for new posts.
  1. Lunatic Looking For Asylum

    Before long we'll just be running several apps on the server and get rid of the Docker overhead as well.

    Fashion repeats.

    1. Anonymous Coward
      Anonymous Coward

      Hmmmmm

      I'm not sure that's accurate, to be fair. Containers are a much more versatile solution that offers you the capacity to deal with these matters before they become a real problem.

      1. RegGuy1 Silver badge

        Re: Hmmmmm

        And remember ALL containers will use the same kernel. Want a different kernel? You'll have to fire up a different VM (OS).

        But if the kernel gives you all you need for all the apps you want to run then containers are the way to go.

        1. AVee

          Re: Hmmmmm

          If the kernel gives you all you need for the apps you want to run, why don't you just run them?

          Don't get me wrong, there is a time and a place for containers. But an OS can just run multiple applications at the same time out of the box and often that's just all you need.

          1. drtune

            Re: Hmmmmm

            I'd say the fairly minimal runtime overhead of containers is very much worth the convenience of having your components network- and filesystem-isolated, having a simple GUI to manage them (e.g. Portainer), etc, etc. They're _much_ closer to being "lego bricks" than what apt-get can do for you at the best of times.

            That developers can now ship their products in a form that is both well-optimized in footprint (e.g. running on Alpine) yet "batteries included" and practically guaranteed to work out of the box (plus guaranteed to remove without trace), is marvelous

            1. Anonymous Coward
              Anonymous Coward

              Re: Hmmmmm

              > "That developers can now ship their products in a form that is both well-optimized in footprint (e.g. running on Alpine) yet "batteries included" and practically guaranteed to work out of the box (plus guaranteed to remove without trace), is marvelous"

              I see your "practically guaranteed" and raise you the following from the recent Alpine 3.14 release notes ("https://alpinelinux.org/posts/Alpine-3.14.0-released.html"):

              "The faccessat2 syscall has been enabled in musl. This can result in issues on docker hosts with older versions of docker (<20.10.0) and libseccomp (<2.4.4), which blocks this syscall."

              Your containers are still partly at the mercy of the host whereas with VMs there's better isolation.

              Of course the best of both worlds (from a footprint perspective) is to use Alpine as both the host OS and as the container base image :-)

            2. big_D Silver badge

              Re: Hmmmmm

              It is swings and roundabouts. There are arguments both ways. Containers will always have some overhead and if performance tuning is really needed, you will need a dedicated machine for the application. If you are running in a shared cloud environment, the ease of movement between hosts and the added security will be more useful.

              I can see lots of situations where containers are useful and lots where "going native" is still the better option.

              I think we will be doing both for the foreseeable future.

          2. Tim99 Silver badge
            Trollface

            Re: Hmmmmm

            But that assumes competent programmers (old farts who can write stuff with uptimes of months) and systems that allow hardware to run indefinitely with adequate backup/duplication. Get with the (Cloud) trend - "When it falls over - Just restart it" (every couple of hours/days).

            I nearly put a Joke icon, but I didn't >>=======>

          3. big_D Silver badge

            Re: Hmmmmm

            Security is the main argument. If the containers are isolated in their own sandboxes, if one goes rogue, it can't kill the host OS or other applications running in other containers.
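
            As a rough illustration of the kind of sandboxing being described, here is a minimal sketch of my own using the Python docker SDK (pip install docker), assuming a local Docker daemon; the image, user and resource limits are illustrative choices, not anything from the article or this thread:

            import docker

            client = docker.from_env()

            # Run a throwaway container with most of its privileges stripped away.
            output = client.containers.run(
                "alpine:3.14",
                "id",                  # print the in-container identity, then exit
                user="nobody",         # don't be root inside the container
                cap_drop=["ALL"],      # drop every Linux capability
                read_only=True,        # read-only root filesystem
                mem_limit="64m",       # a rogue process can't eat the host's RAM
                pids_limit=64,         # fork-bomb protection
                network_mode="none",   # no network access at all
                remove=True,           # clean up the container on exit
            )
            print(output.decode())     # e.g. uid=65534(nobody) gid=65534(nobody)

            Even a process that goes rogue inside that container has no capabilities, no network and a hard ceiling on memory and processes, which is the point being made above.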

            The arguments in the "for" article are a bit off, though. You still need an operating system in which to run the containers. The container just contains the application code and configuration; it doesn't provide the OS to run the services, it needs to "borrow" the core OS features from the underlying OS. There is no way around that, other than making the container a full VM...

            I think that they have their place and, in the cloud, they are a good option. If you are running your own hardware and VM environment, it makes deployment in some respects easier, and removing or replacing a container is easier than uninstalling an application and putting in a new one. No crud left kicking around that needs to be manually cleared out.

            That said, I've had containers or host environments that were real dogs, where running the base application on the OS directly was actually quicker and easier. QNAP is a good example: I used a Unifi container on a QNAP NAS. It installed cleanly and easily, but you couldn't upgrade it; you needed the container information to install the update, and that information wasn't available on the QNAP, it was all hidden. The only way around it was to export the config, delete the container, install the updated container and restore the config. Unifi's built-in update routines don't work in the container.

            In the end, I put the full Unifi management software on a Raspberry Pi. It works much better than the QNAP implementation of Docker.

            This isn't Docker's fault; the problem is how QNAP implemented it: easy installation out of a store of Docker containers, but no information on the installation parameters. Updates can only be done by hand on the command line, and they need the installation parameters that the GUI used, which aren't documented, and the GUI itself doesn't allow updates.

            I like the idea of containers, but they still need to mature, and "rogue" environments with a half-arsed implementation won't do anything to help improve their acceptance. I think we will have both options (container and manual installation in a normal operating environment) for a long time to come.

            There will always be situations where the extra performance of direct installation will override the simplicity and security provided by a container, and vice versa. (E.g. for a time-critical application running locally, you will install it directly and fine-tune the OS and software; on a shared cloud host, you will sacrifice some performance for the added security.)

            1. Adrian 4

              Re: Hmmmmm

              "Security is the main argument. If the containers are isolated in their own sandboxes, if one goes rogue, it can't kill the host OS or other applications running in other containers.

              "

              Like OSs were designed to do for applications.

              1. big_D Silver badge

                Re: Hmmmmm

                Except that applications have full access to the OS (excluding mobile OSes) and can overwrite or delete files in the user domain belonging to other applications; with privilege escalation, they could break out of their own "zone" and trample all over the OS.

                The container sandboxing is supposed to hinder that, stopping an app from damaging the OS or other containers, even if it is poorly configured, hostile or gets taken over by malware.

      2. Michael Wojcik Silver badge

        Re: Hmmmmm

        Containers are an awkward evolution of the line that began with chroot jails into a half-assed solution to the sort of problem that approaches such as "library OSes" solve better.

        As for VMs going away – tell it to z/OS. VMs existed long before VMware was a glint in the milkman's eye.

        1. PFer

          Re: Hmmmmm

          Well, 2 comments. (And this is from a background of being out of the game these last 8 years - retired. So not au fait with Container technology.)

          1) Doesn’t the final bit of the article sound as if we’ve just reinvented Application Servers from the early 2000s?

          2) I did a lot of work with organisations who had large IT estates and were interested in ‘this new micro services thing’. In general, apart from new builds, they didn’t implement them. There were two difficulties. 1) Just like refactoring, when you cost it, it’s difficult to justify financially unless there’s an overriding business imperative. (We did find one or two.) 2) The scale of the ongoing management complexity became obvious when it became necessary to have a catalogue of microservices. And since complexity = costs + cock ups….

          So I get it with new builds. But like one of the other commentators, I’m less than convinced for what you might call ‘legacy’ or you might call ‘what’s making the money’.

          1. Claptrap314 Silver badge

            Re: Hmmmmm

            One thing to be aware of: if Google's implementation of SRE actually prevails (adding SWEs to the Ops orgs), you get an explosion of capability. While K8s is clearly a first cut, the cost of managing dozens or hundreds of applications in prod can be brought down to levels far below what most people can imagine right now.

            And yeah, keeping VMs around just to run that one legacy app that no one is going to touch is going to be a thing long after I'm gone.

    2. AMBxx Silver badge
      Facepalm

      In any discussion like this, there are two opinions you should always ignore:

      1) We should do everything like this

      2) We should do nothing like this

      As ever, the truth is somewhere in between.

      How about we just choose the right tool for the job?

      1. Halfmad

        Honestly, because experts are paid to tell us otherwise.

        But yeah, fit the tool to the job, not the other way around.

      2. teknopaul

        One addition, the right toolset for the job.

        Composing applications from smaller parts is the Unix way. Composing systems from smaller services is the microservices way.

        It's odd the microservices lot are at odds with the full toolset approach.

        I see a lot of unnecessary code written for the lack of a cron job. You can drop crond into a container fairly easily, and if you can't, fix your container build.

  2. James Haley 2

    Asking for a friend..

    Do we have to read the arguments to vote?

    1. Anonymous Coward
      Anonymous Coward

      Re: Asking for a friend..

      Ha ha

    2. Michael Wojcik Silver badge

      Re: Asking for a friend..

      Wait – are you saying reading the arguments first is an option?

  3. Anonymous Coward
    FAIL

    No more managing operating systems and monolithic apps

    Yeah, Good Luck with that.

    News at 11: your datacenter server is not your Android smartphone or iPhone. There are no "apps" on your server. Deal with it.

    But go ahead and make your glossy slide deck promising just about anything. PHB will love it.

    1. Anonymous Coward
      Anonymous Coward

      Re: No more managing operating systems and monolithic apps

      You really didn't invest a lot of time on basic research before you answered, did you?

      1. Anonymous Coward
        FAIL

        Re: No more managing operating systems and monolithic apps

        > You really didn't invest a lot of time on basic research before you answered

        Of course not.

        I've only been doing this - Operating Systems and Compiler development - for 22+ years. What do I know, O Brave Anonymous Coward.

        Teach me, All-Knowing Master.

        The entire premise of TPM's article can be summarized as: Buzzword Soup. Not even the timeline is remotely correct.

        1. This post has been deleted by its author

          1. Anonymous Coward
            Anonymous Coward

            Re: No more managing operating systems and monolithic apps

            > "And containers show us that running a big fat operating system on every compute element is far from efficient."

            What does that even mean? Does he mean that running an OS on a bare-metal compute node is inefficient? Or does he mean that running an OS inside a container that's running on a compute node is inefficient?

            I fail to see how adding a layer cake of indirection - which is what a container really is - would yield better performance than bare-metal. But hey, perception is reality and no-one's counting instruction cycles. It feels faster because we spent a lot of money setting it up and getting it to work.

            > [ ... ] it is far better to stop having an operating system at all.

            Moronic Statement Of The Year. Who and What is going to run your container?

            > The minute every server has a data-processing unit (DPU, aka SmartNIC) that can virtualize security, networking, and storage, a server CPU becomes not much more than an application runtime environment.

            Say What?? The CPU is now application runtime? Does he even understand how Operating Systems actually work?

            Monolithic apps. What's the definition of a monolithic app? Server-side software?

            Is Apache httpd a monolithic app? What about nginx? Or Git? Or MariaDB (née MySQL)? Or HPC applications that run distributed / parallel on thousands of compute nodes using some flavor of MPI and OpenMP, and maybe CUDA? [ Insert mandatory buzzword here: AI/ML. Blockchain? Nope, that's cooked. Yesterday's Hype, move on. ]

            In what way, exactly, is HPC software similar to an "app"? Is Git an "app"?

            Speaking of Containers: Have you tried Docker? If you haven't, you should, if only out of curiosity.

            It is completely inadequate for any conceivable use case that one might attempt to use it for. It sucks for development, and it sucks for production too.

            The only possible use case that I could find for it is demoware. If for some reason said demoware needs some special runtime environment that can't be replicated on some simple bare-metal setup. Which is a red flag in and of itself.

            The entire story reduces to the same old story: the Container Hype Bullshit Industrial Complex over-promised (as usual) and then under-delivered (as usual).

            > Also, the timeline is correct.

            Nope, it's wrong.

            1. AVee

              Re: No more managing operating systems and monolithic apps

              "If for some reason said demoware needs some special runtime environment that can't be replicated on some simple bare-metal setup. Which is a red flag in and of itself."

              This. There are more and more pieces of software popping up that are distributed as a Docker image by default (or even only that way). To me that's generally a bad sign, as it suggests running the software is more complicated than just dropping the binaries on a server and launching them. And for most software it really shouldn't be more complicated than that. And if you are building microservices that should be even more true...

              Right now all the software I'm building gets packaged as a debian package by the build server. From there it's trivial to automate deployment any way you like...

              1. Anonymous Coward
                Anonymous Coward

                Re: No more managing operating systems and monolithic apps

                The problem that docker solves is that people have largely forgotten to teach proper package management. Coming from a unixy background, it seems scarcely credible.

                There is an increasing cohort who cannot package into a deb/rpm or make an autotools setup so dependencies can be configured easily on a random system.

                On the other side, the container has become a playpit to dump all the problems resulting from lack of packaging into a semi-standardised format.

                It's crap, but it allows a decent amount of tooling on top of that to support self-service for people who would never be allowed access in another deployment paradigm.

                It's ironic as the best containers, just install a deb.

                Once you've put all the units of release into a container, then you can to some extent ignore the underlying system.

                A VM especially with para-virtualisation and proper package tuning to work together is better technically. However the gluing together of containers is good enough for a lot of workloads.

                At $JOB we have lots of read-Kafka, crunch, and spew-into-DB workloads. These are running in containers hosted in a VM, with k8s. The degree of self-service is quite surprising as I'm more used to provisioning infra in a more controlled manner; the cost is crazy, but someone else is paying..

                Technically containers are a step backwards, and operationally they are a burden. But orchestration has opened up all kinds of middleware and made it easier to integrate things that might not have been possible with VMs.

                Our devs can provision certs, kafka topics, databases, and glue a graph of services together with some copy-pasta. It's horrid but it works, at horrific resource and fiscal expense.

          2. big_D Silver badge

            Re: No more managing operating systems and monolithic apps

            TPM's article was sloppily worded in places. You still need an underlying OS; the container doesn't provide that. It does allow easier sharing of an OS with multiple containers / applications, keeping things nicely separated, but you still have the OS.

            His words would have had more impact if he had been more careful in describing the role of the OS and the container, rather than saying there is no OS...

            1. teknopaul

              Re: No more managing operating systems and monolithic apps

              "OS" needs definition.

              By most definitions a container does not need an OS; it needs a Linux kernel, nothing else.

              An OS, e.g. Ubuntu, includes a lot of other gubbins.

              Lines are blurry for what is and is not an OS, but when we talk about application containers or OS containers, typically OS == all the other gubbins and specifically not the kernel.

              Seems you look at it more like the Kernel is the OS and anything else is not.

              I don't think there is a right and wrong.

              Containers have messy terminology: if a container is a metal box, what do we call the thing inside the box? Package? App? OS?

              1. big_D Silver badge

                Re: No more managing operating systems and monolithic apps

                An OS is the kernel + drivers + routines to enable I/O. Everything else Ubuntu (or Windows, macOS, iOS, Android etc.) delivers on top is not part of the OS; it is part of the complete system. Most so-called OSes these days are the OS layer + GUI + a bunch of bundled crapware that has nothing to do with the OS, and everything that isn't the OS gets bundled into the description, which is wrong.

                The OS is what lets the container operate. It is still needed.

          3. DougMac

            Re: No more managing operating systems and monolithic apps

            || "And containers show us that running a big fat operating system on every compute element is far from efficient. "

            || I'd argue that this is correct and that expanding from that point - which is really all that containers do - takes us to a much more efficient place in computing.

            With the gigantic amount of resources available in server hardware nowadays, does it really need to be super efficient? We are far past the days of counting bits and counting cycles like they had to do up until at least the '90s or '00s for every single program they developed. Then OSes/apps became super bloated because the resources were available.

            So, I have a few OS kernels running in VMs on my 750GB of RAM on my server (times 5 or 10, or many more). They are a pittance compared to the databases and web apps consuming huge amounts of GBs of RAM. Some customers I work with run 256GB database servers as their norm.

            I'm not worried about saving a few GB of RAM for OSs vs. fitting in a 256GB database.

        2. Anonymous Coward
          Anonymous Coward

          Re: No more managing operating systems and monolithic apps

          Hmm. I did not mean to offend. I am also not entirely sure if we are addressing the same points here.

          But I'll try a different tack:

          I don't have years of compiler experience and IMO, that's not necessary when we're talking about containers.

          The beauty of using containers - and the reason I took issue with your comment - is precisely that skills like yours, which are rare within the industry, are not and *should not* be necessary to deploy and maintain standard applications. Containers give you a pain-free way of achieving this goal.

          Combine this with the fact that configs are text files, throw in a CI/CD chain, and you can manage and roll out identical or unique configs to multiple "servers" (containers) running on the same or multiple physical nodes without breaking a sweat. Throw tools like Ansible and Git into the mix and you're as good as gold for scalability and rollback-ability.

          Again my apologies if I offended or misunderstood you, but I stand by my opinion that containers provide real value beyond "buzzword bingo".

          I have been maintaining my own mini cloud environment for quite a few years now and PHP dependency hell used to be a thing until I switched to containers. And all of a sudden, et voilà! Each container has its own PHP version/environment and zero conflicts with other software packages requiring different PHP versions.
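
          For what it's worth, here is a minimal sketch of that kind of setup, assuming Docker plus the Python docker SDK (pip install docker); the container names are made up for illustration, the php images are the official Docker Hub ones:

          import docker

          client = docker.from_env()

          # Two sites, two PHP versions, zero conflicting packages on the host.
          for name, image in [("legacy-site", "php:7.4-apache"),
                              ("new-site", "php:8.2-apache")]:
              client.containers.run(
                  image,
                  name=name,
                  detach=True,             # keep serving in the background
                  ports={"80/tcp": None},  # let Docker pick a free host port for each
              )

          for c in client.containers.list():
              print(c.name, c.ports)       # see which host port each one landed on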

    2. teknopaul

      Re: No more managing operating systems and monolithic apps

      You lost me. My phone and data center servers are computers running Linux, running containers running apps.

  4. Warm Braw

    If you had to start from scratch...

    ... imagine how many valves and relays - or even gearwheels - it would take to replicate our current processing capacity. Which is why we move forward from where we are, even if it sometimes involves retracing our steps a little.

    Virtual machines actually solved two problems: one was allowing "fractional" workloads to be amalgamated, the other was allowing heterogeneous systems to be consolidated. There will likely be less demand for the latter as, on the one hand, Linux becomes even more ubiquitous and, on the other, the x86 hardware architecture is no longer the common denominator for deployments, so source-level compatibility will be seen as key.

    However, I'd expect to see more VM-like techniques being used to harden and simplify containerisation and even to facilitate the mobility of workloads between different hardware architectures as they likely become more diverse and having multiple compiled versions of containers becomes a pain.

    It's the old story of computer evolution: the same principles reappear in different packaging.

    1. Charlie Clark Silver badge

      Re: If you had to start from scratch...

      It's almost as if BSD jails and Solaris containers had never existed…

      1. ItWasn'tMe

        Re: If you had to start from scratch...

        Namespaces anyone?

  5. This post has been deleted by its author

  6. katrinab Silver badge
    Meh

    If the only benefit of containers over VMs is that they use fewer system resources, then I don't think that means containers are the future. Hardware gets more powerful every year. Nowadays that generally means more CPU cores rather than actually running faster, but more CPU cores is perfect for a VM deployment. The overhead for FreeBSD or Linux isn't that high anyway. Windows is a bit more, but even there, in the overall scheme of things it isn't that much, and the benefit of being able to reboot each VM individually I think overcomes that.

    1. Charlie Clark Silver badge

      If the hypervisor does its job correctly there really is almost no difference between a container and a minimal VM. As with all shared systems, genuine isolation of processes becomes key, and IO contention is always an issue.

      1. Michael Wojcik Silver badge

        Under some conditions VMs beat containers.

        1. W.S.Gosset

          !! speed

          > LightVM can boot a VM in 2.3ms, comparable to fork/exec on Linux (1ms), and two orders of magnitude faster than Docker.

          That's amazing.

          And that's using Xen, which itself has only a 3% performance penalty vs bare metal for the started process.

          So... better than container speed, plus all the residual capabilities of a full OS. Or am I missing something?
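
          For anyone who wants to see the baseline that comparison uses, here is a rough sketch of my own (not from the LightVM paper) that times fork/exec of /bin/true on a Linux box:

          import os
          import time

          N = 200
          start = time.perf_counter()
          for _ in range(N):
              pid = os.fork()
              if pid == 0:
                  # Child: replace ourselves with /bin/true, which exits immediately.
                  os.execv("/bin/true", ["true"])
              os.waitpid(pid, 0)  # Parent: wait, so we time full create-to-exit
          elapsed = time.perf_counter() - start
          print(f"fork+exec of /bin/true: {elapsed / N * 1000:.2f} ms per process")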

  7. fwthinks
    Thumb Down

    one sided

    The problem with the argument is that it is one-sided - only looking at the issue from an application perspective. Containers do not run in the ether; they run on operating systems and physical hardware. So containers may make life easier for an app developer or app support team, but not from an infrastructure perspective. Even for virtualization today, like ESX, the underlying hosting platform can be very complex to deploy and manage, but the benefit is that it reduces OS support costs. In a large enterprise with a wide variety of apps to deploy, I do not believe that containers add any additional cost benefit from an infrastructure perspective - as they add an additional management layer which somebody has to manage.

    1. Anonymous Coward
      Anonymous Coward

      Re: one sided

      No, no, no, no, they run in the 'cloud', the cloud is magic.

  8. drtune

    Horses for courses

    1. Anyone not building new systems on containers needs their head examined or to be shown the door.

    2. Converting an existing system to a stack of containers is {some amount of work, varies wildly}; if you're lucky it can be a relatively modest reconfiguration.

    3. If you're unlucky, the task can be practically impossible - or more work than you can be arsed with - in which case a VM is a pragmatic but less efficient compromise; there are some corner cases where VMs have special properties you want despite the efficiency loss.

    4. There is no four.

  9. Rocco Granata

    Be careful with analogies

    I love to read TP Morgan. To my understanding, here he is advocating one approach in a debate, so is naturally a bit biased.

    It is true that the time has come to have many new applications running in containers, orchestrated by Kubernetes or ECS (the AWS-specific flavor of container orchestration). It is also true that the simple analogy with how virtualization entered enterprise computing (it was embedded in mainframes for decades) and workloads moved from physical to VMs does not hold for the transition to containers. I successfully virtualized many applications without a single code change, usually even without any support from their developers. Can you do that moving into containers? (Hint: 9 out of 10 times, no. Out of those 9, 7 can run in a container, but you will be sorry you did it, as it brings more problems than benefits and support and troubleshooting become harder.) We all know that the business application is king. There are still COBOL applications running. It is unrealistic to expect that all the J2EE, .NET etc. based applications made in the last two decades will be reengineered only to fit into containers.

    The sweet spot for containers is stateless workloads where one may need rapid scaling up or down (in a matter of seconds). That is usually what very large and Internet companies need. SMEs, not so much. For stateful applications they bring challenges, one of which is more variable I/O latency.

    It will be interesting in the next few days to read articles written from different viewpoints.

    1. Charlie Clark Silver badge

      Re: Be careful with analogies

      The containers approach is appealing if you want to be able to run lots of copies of the same software in isolation, for example content filters on a network. As soon as you start composing your containers you're effectively configuring VMs.

      The microservices approach is what will trip a lot of people up. It sounds so good, and generally is too good to be true: you end up with a heap of difficult-to-maintain config files and containers using quite a lot more resources than you thought. If you're not Google, you probably want as many components of your application as possible running close to each other, as shipping data down the wire, potentially between disparate data centres, rarely makes sense.

    2. W.S.Gosset

      COBOL

      A little-known aspect of COBOL is that it runs in effectively its own VM (which is how it achieved its instant portability, and hence its popularity, on widely disparate mainframe architectures).

      Or at least, that's how its creators described it originally -- its layering might today be called a Container.

      I'd be interested to hear a hardcore oldskool COBOLer weigh in, there. Jake maybe?

  10. amanfromMars 1 Silver badge

    The Party View from an Honourable Member of the Opposition

    Containers will kill virtual machines

    Yes, there will be those who think that, but it is both misguided and extremely dangerous, with the reverse much more likely as it be naturally guaranteed because of what happens in the depths of the processing of prime novel instruction sets in support of venerable vital and virile and venal virulent viral virtual machinery systems.

    What one would be struggling against and trying to erase and move on from is popularly typified and more clearly illustrated in the containers of a Belmarsh and a corrupt and perverted elitist justice system's detention of the virtual machine, Julian Assange, and its earlier clone in the introduction of arbitrary internment without trial of suspected nationalist republicans in the North of Ireland whenever prime novel instruction sets are ignored or arrogant thought best penalised.

    Troubles which can easily ignite and fan the great purging flames of violent and deadly revolution be guaranteed, which open up more cans of rotten worms for destructive overwhelming viral attack and virtually anonymous autonomous assault.

    Crikey, that should be headed ..... The Immortal Rise and Impregnable Rise of the IntelAIgent Virtual Machine ...... although that would suggest, rather than it being an ancient reality for current and future presentation, it is something yet to show form and come. That would be terribly misleading and not do the situation justice and itself proud.

  11. Anonymous Coward
    Anonymous Coward

    Meh.

    Definitely horses for courses.

    Whatever happened to picking the right tool for the job? Half the mess I see is down to some PHB deciding to use a technology 'because it, based on the sales blurb, saves money'.

    Like the time someone decided to put VMware on a meaty system that could only ever run ONE VM at a time (because of the software requirements). Times that by 8. The VMware licensing cost was insane - and don't even get me started on the issues around the number of devices that could be attached.

    Or the time, on a course that touched on microservices, the instructor gave a nod and a wink saying 'Yeah, microservices mean you can develop and deploy stuff without having to speak to the security or infrastructure teams'. That just left me wanting to keep that far away from anything important.

    And who can forget the time where a container just stopped working (due to someone else randomly changing something on the host in the belief it would make *their* job easier). Cue container misery...

    Just wait until people begin to treat the infrastructure their software runs on in the same way as they treat their support contracts. Let's listen out for the PHB's wailing when it comes to wanting to move their workload to a different provider and they get the bill for the migration (the amount of wailing will be proportional to the amount of data to be migrated).

  12. thondwe

    Anyone remember Mainframes?

    Back in the day we had one big machine which successfully ran lots of programs for lots of people (safely and securely?). Then people wanted their own little computers.

    VMs and Containers just try to recreate the mainframe world with lots of little computers tied together with string (Ethernet usually, unless it's for HPC)

    Anyone want to start again with a clean sheet?

    1. Roger Kynaston

      Re: Anyone remember Mainframes?

      I have long thought that from when the place I was working first put in mainframes. Oddly, it was only a couple of years after they decommissioned their ICL mainframe.

      I quite look forward to booking time on the processor queue otherwise known as the cloud.

    2. DougMac

      Re: Anyone remember Mainframes?

      "Back in the day we have one big machine which successfully ran lots of programs for lots of people (safety and securely?). Then people wanted their own little computers."

      "VMs and Containers just try and re-image the mainframe world with lots of little computers tied together with string (Ethernet usually, unless it's for HPC)"

      VMs _started_ on mainframes, in order to partition all the resources into usable chunks while separating important processes. Much as they do now.

      They are not just trying to emulate mainframe environment with enough resources.

      1. Michael Wojcik Silver badge

        Re: Anyone remember Mainframes?

        VMs _started_ on mainframes, in order to partition all the resources into usable chunks while separating important processes.

        And specifically to provide "personal" computers, under CP/CMS and then VM/CMS. The virtual machine concept was developed by Creasy and Comeau in late 1964 as part of the design work on CP-40, as a scheme for user isolation. Melinda's paper is the canonical history.

        So, yeah, everything old &c.

  13. stewwy

    VHS vs Betamax all over again.

    But seriously,

    one of the reasons VHS won over the technically superior Betamax was that porn was not easily available on Betamax. However, in areas that required hi-fidelity, such as broadcasting, Betamax persisted until 2002, and VHS until 2003.

    My expectation is that both will continue until they are killed off by something better. One will be used more generally, containers probably, and VMs used where security is more important than ease of use.

    1. This post has been deleted by its author

    2. W.S.Gosset

      Re: VHS vs Betamax all over again.

      Even sillier than that: Betamax threw away the bulk of the market because its retail-available blank cassettes were too short to allow taping any normal programme. If you wanted to tape anything yourself, you had no choice but to buy a VHS.

      Entirely their management choice; correctable at any time by simply making bigger cassettes for retail.

      They ignored it.

  14. DougMac

    I wish I could find the cartoon on the origins of containers

    It's a dev and his boss.

    The dev is trying to explain that sysops are dumb because they can't figure out all the correct versions of packages that play together well that the developer used to develop his app.

    Boss's solution is that we ship the dev's computer out to run the app instead.

    1. bombastic bob Silver badge
      Devil

      Re: I wish I could find the cartoon on the origins of containers

      couldn't you do the same thing by imaging the dev's computer into a VM?

      1. Anonymous Coward
        Anonymous Coward

        Re: I wish I could find the cartoon on the origins of containers

        P2V

  15. W.S.Gosset

    Xen vs VM overhead

    > even if VMs are a bit heavy in terms of server overhead

    Exception: Linux on Xen runs at only a 3% penalty to the bare metal.

    Truly amazing.
