CoreOS bags $12m, touts Tectonic – a DIY Google cloud for big biz

Container-happy Linux upstart CoreOS has launched a beta program for a new distribution of software designed to let enterprises run their own infrastructures the way large-scale software companies like Google run theirs. Dubbed Tectonic, the new offering combines the lightweight, container-centric CoreOS Linux distribution and …

  1. This post has been deleted by its author

    1. Destroy All Monsters Silver badge
      Paris Hilton

      I guess it's more "lightweight" (regarding space and indirect calls till you hit the hardware)?


      I am at the point where "exciting" makes me throw up...

    2. Anonymous Coward
      Anonymous Coward

      1. Management. Imagine fifty servers each hosting a Linux OS and KVM, and running five Linux guest VMs in each. Now suppose I need to patch bash against shellshock. That's potentially three hundred invocations. With containers I only need patch the fifty servers. Yes, systems are scriptable nowadays, but less is more, period.

      2. Performance. Why should I run my app on top of Linux on top of Linux if I can run my app on top of Linux and get "good enough" isolation?

      1 and 2 mean real money savings for people like us with lots of apps to run.
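The patch-count arithmetic in point 1 can be sketched in a few lines (the numbers are the hypothetical scenario from the comment, and the container figure reflects the commenter's claim that only the shared hosts need patching):

```python
# Back-of-the-envelope comparison of patch invocations, using the
# hypothetical fleet from the comment above: 50 hosts, 5 guest VMs each.
hosts = 50
guests_per_host = 5

# VM approach: patch every host OS plus every guest OS.
vm_patch_targets = hosts + hosts * guests_per_host

# Container approach (per the comment): guests share the host userland,
# so only the 50 host systems need the bash fix.
container_patch_targets = hosts

print(vm_patch_targets)         # 300
print(container_patch_targets)  # 50
```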

      The irony of all this being essentially technology from IBM circa 1971 is also delicious. MVS, LPARs, etc. -- all ideas that the cool kids are rediscovering for themselves forty years later. It would be great to have El Reg interview some of the old IBMers...

      1. Anonymous Coward
        Anonymous Coward

        > The irony of all this being essentially technology from IBM circa 1971 is also delicious. MVS, LPARs, etc. -- all ideas that the cool kids are rediscovering for themselves forty years later. It would be great to have El Reg interview some of the old IBMers...

        Ain't that the truth :D

      2. Lusty

        While you're right that there are fewer systems to manage and patch, you're wrong to think that normal enterprises want this. Change control means that anyone without an army of coders writing bespoke software for all the company's workloads will want to patch each workload individually, in a controlled manner, to reduce business risk. This works for Google because they have numbers on their side: each hardware image probably runs only one workload, but perhaps 50 instances of it. They would have sufficient hardware to tolerate the failure of many servers, so patching may not be an issue. They also have robust procedures for managing those changes, while most enterprises do not, and backing out is often chaos.

        The technology here and in virtualisation may be based on old technology but there are a significant number of new tricks which those old platforms can't do. There are also a number of things which make working with them unpleasant compared to the newer copies with updated tool sets.

      3. This post has been deleted by its author

        1. Anonymous Coward
          Anonymous Coward

          Re: #2, performance, my personal experience has been that there are three axes: instantiation time, CPU/mem utilization during running, and network performance. Instantiation time is better on containers, but doesn't matter to me as my apps tend to run for months at a time. CPU/mem is a wash - actually, the long pole in the tent for me is filling the host CPU's cache and making sure my guests (VMs or containers) always have the data they need in the cache. Network performance is a big problem for VMs relative to containers. There are hacky things like SR-IOV but they don't feel "right" (although they work well). Containers essentially give you bare metal speeds as they don't have the encapsulation overhead. 'Course, container networking is strange at best and downright perverse at worst, so you have to be prepared to think in different ways if you go down this path, but it can be worth it for the network speedup alone.

        2. David Dawson

          The benefits of containers are really twofold: one is efficiency for ops, the other is standardisation for development.

          For ops, containers really can be seen as just the next step in virtualisation. They give lower isolation guarantees than VMs, which in turn give lower guarantees than bare metal. Containers give many of the same benefits as VMs too, potentially denser deployment of software.

          This density can be seen in the lower overhead they have compared to VMs:

          Memory overhead of just booting a VM on vsphere (ie, before the OS is loaded)

          Comparison of VMs and containers (PDF)


          Overall, containers have a lower penalty on CPU usage, and a much lower overhead on memory usage, as the guest OS and hypervisor penalties are removed. This comes at the cost of using Linux as the host and overall lower isolation; it's a trade-off. On the Linux-as-host point, it presents a larger attack surface compared to VM hypervisors.

          For development, the container acts as a standardised deployment artifact that is much, much, much (really) smaller than a VM image. It'll effectively be the application binaries, with supporting scripts. The lower layers are stored as separate portions and downloaded separately.
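A toy model of why that layering makes the artifact so much smaller (all sizes below are invented for illustration, not measurements):

```python
# Rough illustration of layered container images vs full VM images.
# A VM image ships a whole OS per application; container base layers
# are stored once and shared, so only the app layer is per-application.
base_os_mb = 200     # hypothetical shared base layer (minimal distro)
app_layer_mb = 20    # hypothetical app binaries plus supporting scripts
vm_image_mb = 2000   # hypothetical full VM image, per application

apps = 10

vm_total = apps * vm_image_mb                        # every image carries the OS
container_total = base_os_mb + apps * app_layer_mb   # base pulled once, reused

print(vm_total)         # 20000 MB shipped for the VM approach
print(container_total)  # 400 MB shipped for the layered approach
```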

          They are a good tool, and not a replacement for VMs. Instead, they let us be a bit more nuanced in the way things are done. They certainly will replace VMs in many situations, but by no means all, and probably not the majority, in my opinion.

        3. Anonymous Coward
          Anonymous Coward

          RE: Performance

          >Anon because I'm at work.

          The only one I can speak to is performance. I spend a lot of time using Proxmox which is just a nice web interface for Debian with both KVM and OpenVZ installed. Performance vs bare metal works like this (your mileage may vary depending on what those VMs/Containers are up to, my stats are for primarily LAMP "servers"):

          KVM/QEMU (VM) - about 10-15% slower than bare metal, and there is notable memory/CPU overhead

          OpenVZ (Container) - about 3-4% slower and there is no perceptible overhead

          So 10-15% or 3-4% more latency and fewer total sessions before things bog down. This roughly lines up with most of the literature I've read, so I think my results are pretty normal. Honestly, I'd almost rather manage a bunch of microservers, but that's another story. Also keep in mind that in most cases it will be "disk" I/O that slows things down, not your choice of virtualization strategy.
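Those overhead percentages translate to request latency like this (the baseline is a made-up figure; the percentages are the commenter's observations, not formal benchmarks):

```python
# Applying the quoted overheads to a hypothetical bare-metal latency.
baseline_ms = 100.0            # hypothetical bare-metal request latency

kvm_ms = baseline_ms * 1.125   # KVM/QEMU: ~10-15% slower, midpoint 12.5%
openvz_ms = baseline_ms * 1.035  # OpenVZ container: ~3-4% slower, midpoint 3.5%

print(round(kvm_ms, 1))     # 112.5
print(round(openvz_ms, 1))  # 103.5
```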

          1. Anonymous Coward
            Anonymous Coward

            Re: RE: Performance

            Yawn, FreeBSD and Sun have been doing this properly for years, long before Linux got in on the act.

            What's with the 9/11 'hero' pic anyway? :-(

      4. Ilsa Loving

        There is one important difference...

        You don't need to spend several million dollars to get said features.

    3. Tom Maddox Silver badge

      Great tagline

      Virtualization: it's like a mainframe, except cheaper, multi-vendor, modular, and distributable.

    4. Ian Michael Gumby

      @1980's ... Yeah but you have to like the name...

      Hmmm what shall we call this "disruptive" company that is going to "shake up the IT world?"

      Tectonic ... try and copyright it. :-P

      1. Lusty

        Re: @1980's ... Yeah but you have to like the name...

        "Tectonic ... try and copyright it. :-P"

        Why would there be an issue copyrighting the word Tectonic in the specific realm of IT unless it's already taken? Or have I missed something funny?

        1. Anonymous Coward
          Anonymous Coward

          Re: @1980's ... Yeah but you have to like the name...

          > "Tectonic ... try and copyright it. :-P"

          I think the word you're looking for is "trademark".
