So a unikernel
Is basically one huge binary, with everything including the programs compiled in, that presents itself to the user as a normal OS. Sounds like a ROM-based 8-bit computer.
Linux container biz Docker has bought Unikernel Systems, a startup in Cambridge, UK, that's doing interesting things with roll-your-own operating systems. Rather than build an application on top of an OS, with the unikernel approach, you build your own tiny operating system customized for your application. It's quite a coup …
This was rather my reaction. When personal computers first arrived on the scene it was received wisdom that they didn't need much in the way of an operating system. Operating systems are for sharing - to make sure no-one hogs the resources or trashes someone else's files - and since PCs weren't shared, operating systems weren't really needed. Complicated, interrupt-driven stuff that was hard to write and debug was particularly frowned on - it didn't matter if you were wasting CPU cycles polling for I/O devices to be ready because, well, what else was the CPU going to be doing?
These days we have desktop computer operating systems that largely replicate those of old-fashioned timesharing mainframes, even though our desktop computer systems remain single-user devices. And we have them because they're necessary - we don't want to sit unproductively while the printer is in use or have to close and save our spreadsheet before we can access the Internet. And we don't want a badly-written application corrupting the data in another.
There's a good argument that we don't have enough functionality in our operating systems - having simply borrowed from established technology rather than inventing anything radically new, they're not a lot of use at protecting us from malware and spying because their heritage is protecting users from one another, not protecting users from their own unwitting actions.
As long as you're sharing resources and don't want to devote massive resources to testing applications and their interactions with each other, you're going to have to have something that looks very much like an operating system. You might call that operating system a "hypervisor", but the hypervisor plus the application-space bits will inevitably be functionally equivalent to an operating system. Sure, you can optimise the context switching, but if you want to get TCP/IP out of the OS altogether, for example, something still has to safely ensure that different applications get only the traffic for their ports. Of course, if you give every application a different MAC address, the kernel doesn't have to do that multiplexing, but that's not a gain that comes from moving code boundaries around, it's a gain that comes from a fundamental change in the application model.
In short (at last), repackage the functionality however you want, but it's not going away and anyone who says it is is trying to sell you something.
It also implies that everything inside the application runs with the highest user privilege levels. That may not be the case - in some applications each process/thread may run at a different privilege level (i.e. threads impersonating the calling user) - to achieve this you may need to fire up even more containers and maybe exchange data among them. This model may work for some applications, not for many others. But given that most developers run everything at the highest privilege level they can because it's easier to code that way, I won't be surprised if this stuff becomes common.
Moreover instead of updating a single set of kernel and user libraries, now you need to redeploy each container to patch them.
"Moreover instead of updating a single set of kernel and user libraries, now you need to redeploy each container to patch them."
You have to deploy a fix as well; the difference is that you are very likely to have installed a new binary instead of having built it, while it seems to me (a hunch really, I might be wrong) that you are more likely to build a unikernel binary yourself. So basically the difference is in how you deploy the new binary, and how intrusive that might be for users.
"
Is basically one huge binary, with everything including the programs compiled in, that presents itself to the user as a normal OS. Sounds like a ROM-based 8-bit computer.
"
Which is all that you need on a device that has dedicated functionality such as a router, set-top-box, smart heating & lighting control systems etc. Or a PC that you are *using* for a dedicated function.
"Sounds like a ROM based 8 bit computer."
Assuming it is running on some kind of hypervisor, a better analogy would be a process on an operating system that properly isolates processes from each other. See also http://www.catb.org/jargon/html/W/wheel-of-reincarnation.html for other examples.
This model may even reduce the number of security vulnerabilities present in the software: less code means fewer bugs.
Not quite. Fewer lines of code mean fewer opportunities to make mistakes and an easier time for those reviewing it for errors. It is possible to cram many, many flaws into a small number of lines (think IoT programming). Also, if I understand how this is supposed to work, I believe it might re-introduce flaws that were previously done away with (perhaps something like a Ping of Death) or allow ownage more easily through an exploit against the application.
Unikernels sound like a poor man's way of re-architecting Linux so that the TCP/IP stack doesn't run in a different privilege level to the application using it. Rather than fix the kernel / TCP/IP design, just shove the application into the kernel and hope for the best. This is not a problem that other *nixes and OSes have.
Meanwhile, whilst the unikernel is managing memory, storage and NICs that it 'thinks' exist, the burden of controlling real access to real hardware is dumped onto the hypervisor, and the layers between applications and hardware are fatter and more numerous than if you just ran the app on a bare-metal OS. For instance, it can't be efficient to have a bunch of virtualised Translation Lookaside Buffers when it's hard enough for the chip designers to make the real one in the CPU work well.
You could have an even slimmer unikernel that simply passes, say, memory allocation requests directly through to the hypervisor, but then the hypervisor is the OS.
The net result is that a unikernel is like an unnecessarily fat application that duplicates a lot of OS things, running on an already quite fat hypervisor that is also having to do a lot of OS-type things. That's not so hot if your biggest business expense is electricity. And unless I'm very much mistaken, there's nothing to suggest that a hypervisor is going to be more or less "secure" than a well-architected OS.
So, it's inevitably slower than apps and services running on a well-architected bare-metal OS, and there's no reason to suppose that it is any more or less secure than any other OS. If you don't trust your OS (of any flavour), logically you cannot trust your hypervisor either.
Also what happens if you discover a bug in a shared library that's been used, or in the kernel itself? You have to open up every single unikernel you have, update the library or kernel as required, and then get it running again. That sounds like a lot more work, and like a lot of work to somehow magically automate. It's terrific if there are already tools to do this for you, a pain in the arse if there aren't. It's certainly not as simple as "apt-get upgrade".
</rant>
Real systems guys do it bare metal.
With Xen (like a lot of hypervisor platforms), the "NICs" and "storage devices" are just message passing drivers back to the hypervisor. There is no emulation overhead, just overhead of the message passing (eg, context switches).
This also isn't just about fixing TCP/IP by a long shot.
"
With Xen (like a lot of hypervisor platforms), the "NICs" and "storage devices" are just message passing drivers back to the hypervisor. There is no emulation overhead, just overhead of the message passing (eg, context switches).
"
But the whole blurb about the wonders of unikernel was supposed to reduce context switches...
"the whole blurb about the wonders of unikernel was supposed to reduce context switches..."
Shhh. The chances of a few decent unikernel IPOs are already limited enough due to the market in general, without people like you adding actual technical facts to the picture.
[Edit: although, didn't QNX do message passing without context switches in the right circumstances? And if they could...]
"
[Edit: although, didn't QNX do message passing without context switches in the right circumstances? And if they could...]
"
I had some fun with QNX in the 80s but haven't really kept up with where it is today. I kinda forgot about it until Cisco built its IOS-XR on it and even then had just a cursory look at it.
As I recall they bypassed needing context switching between threads with some protected shared memory, and also when context switches are needed, they are very fast in QNX.
Unikernels sound like a poor man's way of re-architecting Linux so that the TCP/IP stack doesn't run in a different privilege level to the application using it.
It seems rather reminiscent of User-mode Linux...
This is not a problem that other *nixes and OSes have
This is not a problem that Linux has; it is someone's idea of an improved way of delivering services. I have to say, I'm unconvinced...
Vic.
This hardware that the apps can now talk to directly, in line with this month's fashion: is it by any chance the kind of stuff that used to potentially be shared between apps, e.g. a network interface, or the modern equivalent of a disk drive, or a USB port, or a user interface (input or output), or...
Y'know, the kind of thing that a multi-tasking multi-user OS manages sharing and security for, and had done since the Middle Ages, till Gates came along and re-invented single tasking single user computer systems, before turning Cutler's multi-tasking multi-user concepts (in NT) into a pile of pooh with more security holes than a decade's worth of minicomputer or mainframe updates.
When's COBOL due to be re-invented?
Any idea what order of speedup you would get from an individual application if it didn't have to go through all the kernel communication overhead?
I suppose it depends highly on the application profile, e.g. a network-I/O-bound app, waiting for an external service to respond to a REST/database/RPC call, will not gain much. Just like those apps (as opposed to CPU/RAM-bound ones) don't benefit from a massive rewrite in optimised C++ from a Ruby/Python implementation.
But what's best case scenario, for which type of app? Or is the gain more achieved from paring down the "OS" to only the services that the app needs available - in terms of RAM footprint/attack surface?
> the developer will never have to understand all the details of technology
Yes, because every time we've done that, the software and systems our modern world relies on have always become that much better, faster & more reliable...
As has been mentioned up-thread: hypervisor, microkernel and now unikernel.
So we deploy a cut-down OS (hyper/micro/uni-kernel) and run the application stacks in separate cut-down application environments, until we need to share some stuff between 'em, because they share components and it seems a waste of resources to deploy it twice. So we need some tools to allow two application environments to interact; let's call this IPC, we can use some shared memory.
But then we need some security controls to mediate between the two application environments to make sure one doesn't do bad things to another. Oh, and some shared disk would be nice too...
And hey presto, we're back to a full-fat OS.
That said, cutting some of the fat from modern OS kernels wouldn't be a bad thing.
But first I'd like to see Linux drivers moved out of ring 0; that would be a nice start. I'm not saying go full-on microkernel, but I'd like to at least get the USB & video crap out of the ring-0 kernel.