Re: Existing microvms
When I was reading this I thought of containers too. Not sure what the author's point is, but to someone who has been using VMware for 25 years (and Linux for 28 years) this concept doesn't sound useful. One of the big points of VMs is better isolation: I want local filesystems, local networking, etc. in the VM. If I don't want that overhead I can (and do) use LXC, which I have been using for about 10 years now, both at home and in mission-critical stateless workloads at work. I've never been a fan of docker-style containers myself.
When I think of a purpose-built guest for a VM it mostly comes down to the kernel, specifically being able to easily hot add and, more importantly, hot remove CPUs and memory on demand (something VMware at least cannot fully do). I think I have read that Hyper-V has more flexibility with memory, at least for Windows guests, but I'm not sure on the specifics. Ideally there would be the OPTION (perhaps a VM-level config setting) that if, say, CPU and/or memory usage gets too high for too long *AND* there are sufficient resources on the hypervisor, the guest can automatically request additional CPU core(s) and/or memory, then release them after things calm down. I believe in Linux you can set a CPU to "offline" (I have never tried it, so I'm unsure of the effects, if any), but you still can't fully remove it from the VM in VMware (at least; unsure about Hyper-V/Xen/KVM) without powering the VM off.
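For what it's worth, the guest-side half of that already exists on Linux via the standard CPU-hotplug sysfs interface; a rough sketch (untested by me, and the hypervisor still has to actually reclaim the vCPU on its side):

# offline a vCPU inside the Linux guest (cpu2 is just an example)
echo 0 > /sys/devices/system/cpu/cpu2/online
# see which CPUs the kernel currently considers online
cat /sys/devices/system/cpu/online
# bring it back when load picks up again
echo 1 > /sys/devices/system/cpu/cpu2/online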
Side note: Linux systems can apparently freeze if you cross the 3GB boundary while hot adding memory, so VMware doesn't allow you to go past 3GB if the VM is currently below 3GB. That's a bit annoying: if you built a VM with 2GB of memory and want to hot add up to 4GB, the VM has to be shut off first. Fixing that would be another nice thing for a purpose-built VM guest OS.
Most distro-specific issues, especially hardware drivers, are of course basically gone in VMs. I spent countless hours customizing Red Hat PXE kickstart installers with special drivers because the defaults didn't include support for some piece of important hardware. The most problematic at the time (pre-2010) was probably the Intel e1000e NIC, as well as Broadcom NICs sometimes (and on at least one occasion I needed to add support for a SATA controller). Can't kickstart without a working NIC... but wow, the pain of determining the kernel, finding the right kernel source to download, compiling the drivers, and inserting them into the bootable images. I think that is the only time in my life I have used the cpio command. Intel had a hell of a time iterating on their e1000e NICs, making newer versions that look the same and sound the same but only work with a specific newer version of the driver.
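For anyone curious, the driver-injection dance roughly looked like this for a gzipped-cpio initramfs (a sketch only; the exact layout and paths varied by Red Hat release, and the module name here is just illustrative):

# unpack the installer initrd, drop in the freshly built module, repack
mkdir initrd-work && cd initrd-work
zcat ../initrd.img | cpio -idmv
cp ../e1000e.ko modules/
find . | cpio -o -H newc | gzip -9 > ../initrd-new.img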
The exception may be Windows on the drivers front. I've installed a bunch of Windows 2019 servers in VMs over the past year, and I make it a point to attach TWO ISO images when I create the VM: the first ISO is the OS itself, and the second is a specific version of the VMware Tools ISO that has the paravirtual SCSI drivers on it (newer versions of the ISO either don't have the drivers or they didn't work last I checked). That way I don't have to mess around with changing ISO images during install. I don't have any automation around building Windows VMs as I'm not a Windows person, but I have quite a bit around building Ubuntu VMs. So strange to me that MS doesn't include these drivers out of the box; they've been around for at least 10 years now. Not sure if they include VMXNET3 drivers; I don't need networking during install, and installing VMware Tools is the first thing I do after the install is done, which would grab those drivers.
I don't think I ever touched Plan 9, but the name triggered a memory from the 90s when I believe I tried to install Inferno OS (and I think I failed, or at least lost interest pretty quickly): https://en.wikipedia.org/wiki/Inferno_(operating_system) "Inferno was based on the experience gained with Plan 9 from Bell Labs, and the further research of Bell Labs into operating systems, languages, on-the-fly compilers, graphics, security, networking and portability."
Perhaps someone who knows more (maybe the author) could chime in on why they are interested in Plan 9 and not Inferno, since the description implies Inferno was built on lessons learned from Plan 9, so I assume it would be the better approach, at least in their view.
I dug a little deeper into Inferno recently and found what I thought was a funny bug report, the only issue on its GitHub repo, for software that hasn't seen a major release in 20 years (according to Wikipedia anyway):
https://github.com/inferno-os/inferno-os/issues/8 The reporter was suggesting they update one of the libraries due to security issues in code that was released in 2002. Just made me laugh: of all the things to report, and they reported it just a few months ago.
Side note: I disable the framebuffer(?) in my Linux VMs at work by default:
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-15D965F3-05E3-4E59-9F08-B305FDE672DD.html
If you do that you need to update grub as well (these are the options I use; I suspect only nofb and nomodeset are related to the change):
perl -pi -e 's/GRUB_CMDLINE_LINUX_DEFAULT.*/GRUB_CMDLINE_LINUX_DEFAULT="spectre_v2=off nopti nofb nomodeset ipv6.disable=1 net.ifnames=0 biosdevname=0"/g' /etc/default/grub
perl -pi -e 's/^#GRUB_TERMINAL/GRUB_TERMINAL/g' /etc/default/grub
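After editing /etc/default/grub you also have to regenerate the actual grub config, e.g. on Ubuntu (RHEL-family distros use grub2-mkconfig pointed at their grub.cfg path instead):

update-grub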
If you don't do that in grub you'll just see a blank screen in VMware when the system boots.
There has been at least one security bug in VMware over the years related to guest escape and the framebuffer or something (maybe just this: https://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-PAPER.pdf), so I figure I'll disable it since I don't need it anyway.