Red Hat and dotCloud team up on time-saving Linux container tech

Red Hat is working with startup dotCloud to co-develop new Linux container technology to make it easier to migrate applications from one cloud to another. The partnership was announced by the companies on Thursday, and will see dotCloud's open source "Docker" Linux-container technology get enhancements to work with Red Hat's …


This topic is closed for new posts.
  1. Nate Amsden Silver badge

    what kind of iterative development?

    "not particularly good for iterative development."

    What the hell is that supposed to mean? VMs have, for the most part, been great for almost all kinds of development. It's a really simple concept: allocate some CPU resources, some memory, and off you go. Sure, there is some overhead, but for the most part that doesn't matter (especially in development). Overhead is scrutinized more, I suppose, in the realm of cheap crap web hosts where they cut every possible corner to give you a slice of their stuff for $2/mo.

    I've never used the VMware Lab Manager product (never saw any value in it), and I don't make extensive use of VM snapshots (except for things like OS upgrades), but being able to have a consistent interface regardless of the underlying OS is obviously nice to have.

    For the real world, though, VMs are fine for all but the most extreme circumstances. The overhead incurred is worth it 10x over, really. I mean, I remember back in the earlier days of VMware GSX, nearly a decade ago, when VM overhead was much higher, we only had single-core CPUs, and 16GB of memory was considered a lot.

    It was still a critically useful technology to have for development. I deployed my first production VMware on GSX 3.0 back in 2004 - it was a stopgap, because we didn't have a half dozen physical machines to deploy this last-minute application for a customer (the original plan was to deploy new code to a larger production cluster and share it with the new customer, but there were bugs in the code and that plan had to be scrapped at the 11:59 hour). So we snagged a 2U Dell box that the devs were using to develop this app stack on and shoved it into production in ~72 hrs (much of that time spent configuring and testing the application). The box took more load in 24 hours than the customer expected it to take in the first 30 days. Naturally the app blew up; there wasn't enough CPU to drive all those TRX on such a shitty application. But we managed to get through the days until we could add more capacity (in the form of more web servers outside of the VM host). The VM host lived on for a good six months or so before we retired it in favor of more modern physical hardware.

    I sat through a presentation on the Red Hat platform stuff - I believe it was last December - and couldn't help but think how much waste it had, specifically with DB servers: they were seemingly advocating deploying database servers like you would web servers, instead of consolidating onto fewer, more powerful DB servers that are better optimized (DB caching specifically). The approach was "interesting" but not something I can ever see getting behind myself (I'd much rather use VMs).

    The use of containers (à la the Solaris Containers or FreeBSD jails that some folks used to promote) is cute, but that's about it - IMO, of course. Like most people, I'm not in the business of racing to the bottom or, in one user's words from my first job in 1998, of "squeezing every last ounce of megahertz" out of the system.

  2. ecofeco Silver badge

    VM may be obsolete?! Don't tease me like this!

    "This approach beats VMs in terms of resource utilization, as the OS copy is shared across all apps running on it, whereas virtual machines come with the abstraction of separating each OS onto each VM, which adds baggage."

    See, I KNEW there was something I didn't like about VMs, especially for desktops.

    I mean, besides getting the VM admin to admit a user's account was broken and they needed to create a fresh one. (Or the company PTB deciding to run a VM inside a VM. Yes, I've seen it.)

  3. Anonymous Coward
    Anonymous Coward

    LXC rules

    At work we now use containers for nearly everything. For web server instances, a single container handles about twice as many sessions per second as it would on a KVM instance with the same number of cores and the same RAM.

    The overhead is so minimal that we now also run our MariaDB instances in containers, and run many more, smaller database servers.

  4. wheelybird

    But what does it *do*?

    Linux containers are a great way to run multiple Linux servers; you avoid the virtualization layers that VMs require, so the end result is that a container runs faster than a VM on equivalent hardware. You can do a number of interesting things with the way you set up containers, using features of the Linux kernel to allow for really easy cloning of containers and other fun things - e.g. containers on LVM2 or BTRFS.
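    That cloning trick can be sketched with the stock LXC userspace tools. This is only a rough sketch - the container names are made up, and it assumes a BTRFS filesystem backing /var/lib/lxc; the -B backing-store option and the snapshot flag are what make clones near-instant copy-on-write snapshots:

```shell
# Create a container whose rootfs lives on a btrfs subvolume,
# so clones can be taken as copy-on-write snapshots.
lxc-create -n base -t ubuntu -B btrfs

# Clone it via a btrfs snapshot (-s) instead of a full file copy.
lxc-clone -s base web1

# Start the clone in the background and check its state.
lxc-start -n web1 -d
lxc-info -n web1
```

    (Requires root and LXC installed; with plain directory-backed containers the clone falls back to a slow full copy, which is exactly what the snapshot backends avoid.)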

    I've come across Docker before, and I can't quite see what it offers that doesn't already exist when you use LXC intelligently. As far as I can work out, it just lets you create the equivalent of the virtual appliances you can get from places like the VM store. That's fine, but that's just packaging tools and templating - they're not developing LXC itself. From the article it sounds like they're almost claiming that Docker is the only thing that makes containers useful, and that's a bit cheeky.

    Incidentally, for those interested in playing with containers, I'd recommend trying them out on Ubuntu, as they've put a lot of effort into making containers easy to create and manage (especially when using Ubuntu guest containers).

    A final thought: I'm not sure why the article was banging on about bringing containers down to upgrade the kernel. I can't think of many instances where software development depends on a specific kernel version in the first place, but if that is important, then Ksplice addresses that particular issue - kernel upgrades without a reboot.

  5. hayseed

    libvirt and LXC, not libvert

  6. Kebabbert

    Clone of Solaris Containers

    "....This approach beats VMs in terms of resource utilization, as the OS copy is shared across all apps running on it, whereas virtual machines come with the abstraction of separating each OS onto each VM, which adds baggage...."

    Solaris Containers have done this for ages. Linux is cloning Solaris tech again, just as it cloned ZFS, DTrace, SMF, Crossbow, etc.

    One difference is that Solaris Containers allow virtualization of different OS environments - you can even install Linux in a Solaris Container. There is only one Solaris kernel running, and the virtualized environments just remap their API calls to that single Solaris kernel. One Sun guy started 1,000 Containers on a PC with 1GB of RAM running Solaris; it was very slow, but it worked. Each Solaris Container uses something like 40MB of RAM and 100MB of disk space, by cloning some kernel data structures. They are very efficient.
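    For comparison, creating a Solaris Container (zone) takes just a few commands. The zone name and path below are invented for illustration; sparse-root zones are what keep the per-zone footprint down to tens of megabytes:

```shell
# Define a zone; in a sparse-root zone, /usr and friends are
# loopback-mounted read-only from the global zone, not copied.
zonecfg -z web1 'create; set zonepath=/zones/web1'

# Install the zone's files, then boot it - every zone runs on
# the single global Solaris kernel.
zoneadm -z web1 install
zoneadm -z web1 boot

# Attach to the zone's console to finish first-boot configuration.
zlogin -C web1
```

    (Solaris-only, run from the global zone as root.)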

    And now Linux is getting them too.

