sigh, more NIH from the Ubuntu stable
do they really have to re-invent everything?
Oh, sorry this is all part of their plan for world domination.
Canonical, the company behind the popular Ubuntu Linux distribution, says it's working on a new "virtualization experience" based on container technologies – but just how it will operate remains something of a mystery. Canonical founder and erstwhile space tourist Mark Shuttleworth announced the new effort, dubbed LXD and …
do they really have to re-invent everything?
Oh, sorry this is all part of their plan for world domination. Anonymous Coward
And/or DOMinaMatrix Cyberspace Command and Control of Virtually Real Event Horizons, AC, which practically effortlessly delivers remotely and relatively anonymously and autonomously all perfectly enough planned parts of universal domination to ..... well, Greater IntelAIgent Gamesplay is the New Orderly World Order function for Stealth and Steganography and Security in Especial and Secret Intelligence Services to Apply and Deploy to Mentor and Monitor Systems and SCADA Executive Administrations and Governmental Bodies within the Stellar Satellite Office Envelopes [Expandable Migrating Containment Cells] of the Live Operational Virtual Environment.
And with its IT made maddeningly easy for the intellectually challenged and disenabled to micromanage Multiple Stream Media Hosting engaged to follow and driver the simplest of formulae for complex macros .... Consult, Design, Develop, Deliver, ..... is it quite a true no-brainer of a smarter intelligence led opportunity to excel with experimental experience at novel and noble exercise of excellence for enhanced existence.
And, to be quite perfectly blunt and honest about the matter, and about such matters as are becoming increasingly prevalent and all powerful and tending towards Omniscient Proaction with NEUKlearer HyperRadioProActive IT [all patents pending and trade marks copyrighted with copy left sources protected and secured/Deep Dark Web Vaulted], are such as can also be highly disruptive and constructively destructive ZeroDay Vulnerability Exploits which Export and Support Intangible Sorties and Anonymous Invisible Attacks on Virtualised Inadequate Defensive Forces with Perverse Corrupt Compromised Sources also readily available for Excellent Agents to Deliver Success and ..... well, AIMagical Change is not an idle boast whenever a true reality.
No, it's not paravirtuali[sz]ation — it's based on containers. Think vserver (hot in ~2007-2009), openvz (did it ~2009-2012) or lxc (Linux Containers, in the main Linux tree for quite a long time now; I've been using it ever since). Now... What I fail to understand from the summary is how it differs from lxc (besides substituting a 'd' for the 'c'). If it's taking Docker and making it go for a full Linux OS instead of just an app... isn't it just peeling a layer off and going back to regular lxc?
If I read the article correctly, it describes a Solaris-container-style OS instance. Whether that means a sparse instance or a full copy via a filesystem snapshot wasn't clear. If it works as well as Solaris containers, it's a good way to go. A Linux application container has its uses, but sometimes users want the whole experience. Developers, for instance.
Linux containers are a clone of Solaris zones functionality. Systemd is a clone of SMF. Let's party like it's 2005 (or 2006 if you count btrfs/ZFS).
All jokes aside I recently went on some RHEL training and was surprised at how "Solaris 8" RHEL 6 was.
I started writing a long technical explanation, but I'll just stick to the simple basics. Docker has been getting a lot of press lately. However, Docker and Kubernetes (as referenced in the article) operate at a much higher level than the actual containers. Docker in particular has a very limited app format which is intended to be "cloud friendly". You can do a lot of things with the actual containers that you can't do using Docker.
Containers have been in Linux for years (a lot of public web hosts use them, and everything in Google uses them), but they have been hard to set up unless you really knew what you were doing. What is happening recently is the development of software to make them easier to use.
Canonical's vision of containers involves letting the user do a lot of stuff that Docker can't do. Canonical has long had a lot of server management, deployment, and provisioning software oriented around managing both VMs and raw hardware. So far as I can tell, their new approach simply extends this to using containers. I think they're right in that the Docker "micro-service" approach isn't the answer to everything.
As for the vague "it's a hypervisor, no it's a container" talk, I think that has to do with new features in chips which will bring better security to containers. I don't know what that would be, but that's the impression that I get. It is worth noting that Canonical has been doing a lot of work with IBM on supporting IBM's Power systems. It is possible that something is happening there.
As for running one Linux distro inside another Linux distro, that isn't new. In fact a lot of kernel level development used to get done that way before the KVM hypervisor was integrated into the standard kernel. They didn't even need containers to do this.
And of course, since Docker is an app delivery format that uses containers, not a container system, there's no reason why you couldn't run Docker on top of Canonical's containers while simultaneously using traditional "full size" server applications in containers as well.
Or to put it another way, Docker isn't the answer to everything, and neither is sticking everything into AWS. Canonical wants to provide all the advantages of "cloud" type push button deployment and manageability (start up, shut down, and migrate services quickly on demand), but operating on your own hardware in your own data centres.
I don't know whether they can pull this off. However, server and cloud is most of what Canonical actually does, even though the desktop and mobile end up getting most of the publicity.
I'm wondering if they aren't using something like a chroot jail. In a classical unix-style chroot jail, you can install just an individual program, but you need *all* its libraries and config files in the jail to make everything work. This minimizes the exploit surface: if someone exploits the running service, there's no shell, no wget, and almost no libraries (possibly not relevant if the exploit is statically linked), and usually the service is started directly in the chroot, so there's no "init" or bootup process to "infect" with a rootkit or anything else persistent.
Several distros can certainly run within a chroot jail, and it would be restricted to Linux-on-Linux usage, which matches the restrictions on the technology they describe.
*But* if the chroot has a /dev with /dev/sda etc., it can have full access to the hardware. There's no CPU limiting in the classical setup, and the chroot would use the regular network interfaces. I wouldn't consider this alone suitable for running arbitrary distros. However...
Throw in some "magic" to use the facilities already in Linux, and you could have a chroot that can (if you want) run the init for the distro so it has a normal desktop environment, set up a "virtual" network card for each chroot or share the interface (your choice), rate-limit network and disk, and do CPU scheduling and limiting per-process *or* per chroot (or mix-and-match) as you wish. I would trap access to some devices so you can virtualize just the audio and avoid access to the physical disk. There are utilities that do at least some of this already. Edit: Thanks Gunnar Wolf, I couldn't think of the names of any 8-) "Think vserver (hot in ~2007-2009), openvz (did it ~2009-2012) or lxc (Linux Containers, in the main Linux tree for quite a long time. I'm using it ever since)."
In most distros, the initrd or initramfs sets up /dev, makes sure the disks are mounted, loads kernel modules; things that have already been done in this case. So in general they could skip that part of the boot and continue right after that if they want to boot a whole distro.
Without looking into the implementation, that'd be my stab at how to do it. You'd be using native kernel facilities with no overhead whatsoever, but still have the kinds of control one typically gets by running stuff inside a VM or under a hypervisor.
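A minimal sketch of what those per-chroot limits look like in practice, assuming the cgroup v1 layout: cpu.shares, memory.limit_in_bytes and blkio.weight are real cgroup v1 control files, but the helper function and container name here are made up for illustration, and actually applying the writes would need root.

```python
# Hypothetical helper: express per-container limits as cgroup v1 file writes.
# In real use you'd write these values under /sys/fs/cgroup (as root); here we
# only build the (path, value) pairs to show which knobs are involved.

def cgroup_writes(name, cpu_shares=1024, mem_bytes=None, blkio_weight=None):
    base = "/sys/fs/cgroup"
    # cpu.shares: relative CPU weight of this group vs. its siblings
    writes = [(f"{base}/cpu/{name}/cpu.shares", str(cpu_shares))]
    if mem_bytes is not None:
        # memory.limit_in_bytes: hard cap on the group's memory usage
        writes.append((f"{base}/memory/{name}/memory.limit_in_bytes", str(mem_bytes)))
    if blkio_weight is not None:
        # blkio.weight: proportional disk I/O weight (CFQ scheduler)
        writes.append((f"{base}/blkio/{name}/blkio.weight", str(blkio_weight)))
    return writes

if __name__ == "__main__":
    for path, value in cgroup_writes("jail1", cpu_shares=512, mem_bytes=256 * 1024 * 1024):
        print(path, "=", value)
```

The point is that the "magic" is just writing numbers into files per group of processes; nothing about it requires a hypervisor.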
The "magic" you've just described is what containers are. They're chroot jails *plus* all the stuff which jails don't cover.
There's no single thing which enables containers. It's the sum of a lot of little features in the kernel, which took years to create. The result has been that there have in the past been partial implementations of containers which had limitations. As the last remaining holes got closed off containers became more popular, until we have the situation today where they're the hot "new" thing (although they're not actually new).
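To make "the sum of a lot of little features" concrete: each namespace type arrived in the kernel separately and has its own clone flag. The flag values below are the real constants from <linux/sched.h>; a container runtime passes some combination of them to clone()/unshare() (which, for most of these, requires privileges, so this sketch only shows the flags themselves).

```python
# Kernel namespace clone flags (values from <linux/sched.h>). Each one was
# merged at a different time, which is why container support felt "partial"
# for years.

CLONE_NEWNS   = 0x00020000  # mount namespace (the oldest, from 2.4.19)
CLONE_NEWUTS  = 0x04000000  # hostname / domain name
CLONE_NEWIPC  = 0x08000000  # System V IPC objects
CLONE_NEWUSER = 0x10000000  # user/group ID mappings
CLONE_NEWPID  = 0x20000000  # process ID numbering
CLONE_NEWNET  = 0x40000000  # network stack, interfaces, routing

# A "full" container requests all of them at once:
CONTAINER_FLAGS = (CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC |
                   CLONE_NEWUSER | CLONE_NEWPID | CLONE_NEWNET)

if __name__ == "__main__":
    print(hex(CONTAINER_FLAGS))
```

Only once every one of these (plus cgroups for resource limits) was in mainline did a chroot-plus-flags combination amount to what we now call a container.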
Part of the confusion that is going around is the way that the press is reporting Docker as providing the containers, when in fact it's just a way of deploying applications into containers. Docker has definitely tapped into the PR gold mine.
Canonical is also using containers. They are just allowing containers to be used like VMs, while Docker apps are supposed to be in a special format with limited capabilities. A Docker app's limited capabilities are supposed to make it easier to manage. Canonical on the other hand deals with the management problem by providing a big stack of sophisticated management software.
There is a lot of confusion here because of the messy evolution of Linux containers. Linux containers (LXC) depend on cgroup and namespace support in the kernel, developed to support containers, which has been baking since 2009. Canonical has supported the LXC project since 2012.
LXC containers are a lightweight and portable alternative to virtualization that operate at near bare-metal speed and let you run multiple Linux OSs inside your host Linux OS. So, for instance, you could be running a Debian host with multiple CentOS, Fedora, and openSUSE containers, and vice versa. The real biggie is that these containers are portable and can be moved easily across any system that supports LXC, which is basically any Linux host, since the kernel features LXC relies on are mainline.
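To make that concrete, here is a hedged example of what defining such a container looks like in LXC 1.x. The config keys (lxc.utsname, lxc.rootfs, lxc.network.*, lxc.cgroup.*) are real LXC 1.x keys, while the container name, bridge, and paths are invented for the example:

```
# Example LXC 1.x container config (names and paths are illustrative).
lxc.utsname = centos-guest
lxc.rootfs = /var/lib/lxc/centos-guest/rootfs

# Virtual NIC attached to a host bridge, per-container:
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up

# Resource limits are plain cgroup knobs:
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.cpu.shares = 512
```

A container is then just this file plus a rootfs directory tree, which is what makes moving one between hosts so cheap compared to shipping a VM disk image.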
For those out of the loop, Canonical has been the main supporter of the below-the-radar LXC project that companies like Docker used to propel themselves into prominence. Yes, Docker was based on LXC, the Canonical-sponsored project, and used LXC containers as a base to abstract the container away to a single app. So it's an app delivery platform, compared to LXC, which gives you a complete Linux environment. For most users, containers as a lightweight alternative to virtualization make more sense than a single restrictive app delivery platform, which suits PaaS-type scenarios.
Because of LXC's low profile, many users' first introduction to containers was via Docker, resulting in some misconceptions and the conflation of a single restrictive use case with container technology itself. And with it came a growing ecosystem of projects that paradoxically try to break through some of these self-imposed restrictions and do not support LXC itself.
The fact that LXC is a full-fledged Ubuntu project that gives you tools for container management, a wide choice of container OS templates, and superfast lightweight virtual machines based on containers, and was actually easy to use, was lost in the noise.
With LXD, Canonical seems to have finally woken up to the potential of its own LXC technology, and hopefully they will promote and evangelize it properly. Or someone will put a wrapper on LXD and run away with the momentum.
Biting the hand that feeds IT © 1998–2020