RLY?
“KVM, like other major hypervisors, supports Hyper-V's paravirtualization features,” he wrote.
I have no idea what a hyper-v looks like but it sounds a bit pervy and hence a bit wrong.
The Kernel-based Virtual Machine is making waves. Better known as “KVM”, the open source hypervisor runs Google's cloud, and Cisco is using it as the hypervisor for its network function virtualization efforts. It is widely used by OpenStack users, while Nutanix uses it to power the Acropolis code it hopes will see its users ditch …
virtualisation is inherently kinky since it involves putting things into places they were never designed to go
Which is referred to as sodomy. However, that generally only applies to anal and oral sex.
Titty fucks are referred to as gomorrahy. They're like sodomy, but in a different place.
Paravirtualization allows versions of Windows targeting paravirtualization to operate more like a container than a VM. Paravirtualization gives Windows some truly amazing features which allow it to have insanely higher VM density than if it were running under VMware.
For example, probably the most difficult process for a virtualized environment is memory management. The 386 introduced the design pattern we use today, which consists of a global descriptor table (which creates something like a file system for physical memory)... it allows each program running to think it has a single contiguous area of memory to operate in... though in reality the memory can be spread out all over the system. Then there's a local descriptor table which manages memory allocation per process. This is a sick (and semi-inaccurate) oversimplification of how memory works on a modern PC. But it gives you the idea.
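To make the descriptor-table idea concrete, here's roughly what a single GDT entry looks like in C. This is a sketch of the classic 386 layout; the field names are mine:

```c
#include <stdint.h>
#include <stdio.h>

/* A sketch of one 8-byte x86 segment descriptor as it sits in the GDT.
 * The 32-bit base and 20-bit limit are scattered across the entry for
 * backwards compatibility with the 286. Field names are mine. */
struct gdt_entry {
    uint16_t limit_low;   /* limit bits 0..15  */
    uint16_t base_low;    /* base  bits 0..15  */
    uint8_t  base_mid;    /* base  bits 16..23 */
    uint8_t  access;      /* present bit, privilege ring, segment type  */
    uint8_t  limit_flags; /* limit bits 16..19 + granularity/size flags */
    uint8_t  base_high;   /* base  bits 24..31 */
} __attribute__((packed));

/* Reassemble the linear base address a descriptor actually describes. */
static uint32_t gdt_base(const struct gdt_entry *e)
{
    return (uint32_t)e->base_low
         | ((uint32_t)e->base_mid  << 16)
         | ((uint32_t)e->base_high << 24);
}

int main(void)
{
    struct gdt_entry e = { .limit_low = 0xFFFF, .base_mid = 0x10 };
    printf("descriptor is %zu bytes, base = 0x%08x\n", sizeof e, gdt_base(&e));
    return 0;
}
```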
When you virtualize, operating systems which don't understand they are being virtualized need to have simulated access to the GDT (which is a fixed area of RAM) and be able to program the system's memory management unit (MMU) through direct calls to update memory.
There's also the principle of hard (port-mapped) I/O on Intel CPUs vs. memory-mapped I/O. Memory mapping could always be faked by faking the memory locations provided to drivers. But port I/O couldn't be handled without intercepting all I/O instructions... sometimes by rewriting the executable code on the fly.
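For the curious, this is the kind of instruction that causes the trouble: real x86 port I/O via standard GCC inline assembly. There's no memory address to fake here, so a hypervisor has to trap the instruction (or binary-translate around it):

```c
#include <stdint.h>

/* Port-mapped I/O on x86: the `out` instruction below talks to an I/O
 * port, not a memory address, so a hypervisor can't satisfy it by
 * remapping memory. Needs ring 0 (or ioperm()) to actually execute. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
```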
To make this happen, VMware used to recompile code on the fly to intercept I/O calls and MMU programming. Hyper-V Generation 1 does the same. With the advent of Second Level Address Translation (SLAT, which sadly isn't a look-up table, or we could have called it the SLUT), memory rewrites and dynamic recompilation of MMU code were no longer necessary. The CPU simply introduced a new translation table which sits at a higher level than the GDT... or nests it.
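A toy model of what SLAT buys you, with both table walks stubbed out (the real things are hardware page-table walks, and all names here are invented): every guest-virtual address now goes through two translations, one owned by the guest and one owned by the host, so the hypervisor no longer has to intercept the guest's MMU programming.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gva_t;  /* guest-virtual  address */
typedef uint64_t gpa_t;  /* guest-physical address */
typedef uint64_t hpa_t;  /* host-physical  address */

/* Stage 1: the guest's own tables (stubbed as an identity map). */
static gpa_t walk_guest_page_tables(gva_t va) { return va; }

/* Stage 2: the hypervisor's nested (SLAT/EPT/NPT) tables, stubbed as a
 * fixed offset into the host RAM reserved for this VM. */
static hpa_t walk_slat_tables(gpa_t pa) { return pa + 0x100000000ULL; }

static hpa_t translate(gva_t va)
{
    gpa_t gpa = walk_guest_page_tables(va); /* guest-controlled */
    return walk_slat_tables(gpa);           /* host-controlled  */
}

int main(void)
{
    printf("guest VA 0x1000 -> host PA 0x%llx\n",
           (unsigned long long)translate(0x1000));
    return 0;
}
```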
The I/O issue still needed to be addressed. This started with making drivers for each guest operating system that bypass the need for direct I/O calls, and it works pretty well except on some of the more difficult operating systems like Windows NT 3.1 or OS/2. I recently failed to launch Apple Yellow Box in Hyper-V because of this. VMware has always been amazing at making legacy stuff work because they are really focused on 100% compatibility, even if it makes everything slower.
Microsoft with Windows and KVM with Linux took the alternative approach, which was 1000% better: simply say "We'll run whatever legacy we can run... but we focus on today and tomorrow. Let VMware diddle with yesterday". So Linux was modified to run as a user-mode application, and then later, with Docker, was modified to run without Linux itself. Windows did kind of the same thing...
But Hyper-V did something really cool. Paravirtualization works on Windows by running the operating system... kind of as usual. But then it replaces the memory manager with one that doesn't absolutely require a SLAT. Instead, if it needs more memory, it asks the host operating system for more memory. So instead of wasting tons of memory guessing whether 4GB is enough or not... paravirtualization often gives Windows about 200MB, and if it needs more, then it gets more. So paravirtualization typically makes Windows 20 or more times more memory-efficient when run this way. The trade-off is that the cross-boundary call (from guest to host) is more expensive and can have a negative impact on CPU performance. Also, there are more memory operations in general, so the system GDT is likely to be more active and possibly fragmented. I'm pretty sure MS will optimize this further in the future.
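As a sketch of that ask-when-you-need-it idea: hv_request_pages() below is an invented stand-in for the real guest-to-host call, which is exactly the expensive boundary crossing mentioned above.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy model of dynamic memory: the guest boots with a small allotment
 * and asks the host for more pages on demand, instead of reserving a
 * worst-case 4 GB up front. hv_request_pages() is hypothetical. */
#define PAGE_SIZE 4096u

static size_t committed_bytes = 200u * 1024 * 1024;  /* ~200 MB to start */

static bool hv_request_pages(size_t npages)
{
    (void)npages;   /* stub: the real thing is a hypercall round trip */
    return true;    /* the host may refuse under memory pressure      */
}

static bool guest_grow_memory(size_t needed_bytes)
{
    size_t npages = (needed_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    if (!hv_request_pages(npages))
        return false;
    committed_bytes += npages * PAGE_SIZE;
    return true;
}

int main(void)
{
    if (guest_grow_memory(64u * 1024 * 1024))        /* need 64 MB more */
        printf("guest now committed: %zu MB\n", committed_bytes >> 20);
    return 0;
}
```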
Then there were drivers. VMware has been focused on bare metal, while paravirtualization goes the entire opposite route. In the end, instead of trying to hardware-accelerate every VM operation, which can require billions more transistors and hundreds more watts per server, Microsoft focused on allowing guest operating systems to gain the benefits of the host OS drivers by removing the need for hard-partitioning device operations. So, where VMware would simulate or expose a PCIe device to the guest VM, Hyper-V would give drivers to the guest which simply let it talk directly to (and play nicely with) the host's devices.
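The shape of such a paravirtual device is basically a shared-memory ring that the guest driver and host driver both understand. A minimal single-producer sketch; the layout here is illustrative, not Hyper-V's actual VMBus ring format:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the shared-memory ring at the heart of a paravirtual device:
 * the guest driver enqueues requests, the host driver dequeues them. No
 * emulated device registers, no trapped port I/O. */
#define RING_SLOTS 64u
#define SLOT_BYTES 256u

struct pv_ring {
    volatile uint32_t head;               /* written by guest (producer) */
    volatile uint32_t tail;               /* written by host  (consumer) */
    uint8_t slot[RING_SLOTS][SLOT_BYTES]; /* the shared request buffers  */
};

/* Guest side: post one request into the shared page. */
static bool pv_ring_put(struct pv_ring *r, const void *req, size_t len)
{
    uint32_t head = r->head;
    if (head - r->tail == RING_SLOTS || len > SLOT_BYTES)
        return false;                 /* ring full, or request too big */
    memcpy(r->slot[head % RING_SLOTS], req, len);
    r->head = head + 1;               /* publish; host polls or is kicked */
    return true;
}

int main(void)
{
    static struct pv_ring ring;       /* would really live in a shared page */
    const char req[] = "READ sector 42";
    return pv_ring_put(&ring, req, sizeof req) ? 0 : 1;
}
```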
For storage this offers immense improvements in hundreds of different ways. With a Chelsio vNIC, or Cisco as a runner-up, storage via SMB Direct or pNFS can reach speeds and performance per watt so incredibly far above what VMware offers that the environmental protection agencies of the world should sue VMware for gross negligence over their approach. We're talking intentional earth-killing.
For network, the performance difference is almost equally huge, but once you virtualize storage intelligently (RDMA is the only way), then networking becomes easier.
But back to paravirtualization. Here's an example. If you want to share the GPU between two VMs in a legacy/archaic system, you would need to get a video adapter which is designed to split itself into a few hundred different PCIe devices (due to SR-IOV, chances are a maximum of 255 devices... meaning no more than 255 devices per VM host... so no more than 255 VMs per host). Then you'd need specialized drivers designed to maintain communication with these PCIe devices and to allow the VMs to migrate from one host to another by doing fancy vMotion magic (cool stuff really). This has severe cost repercussions... nVidia, for example, charges over $10,000 per host and requires a VERY EXPENSIVE (up to $10,000 or more) graphics card.
Paravirtualization would simply make it so that if the guest wants to make an OpenGL context, it asks the host for the context and the host hands it back to the VM. The Hyper-V driver then forwards the API calls from the guest app to the host driver directly. This means you're still limited to however many graphics contexts the GPU or driver supports at maximum. But it's more than the alternative. VMware does this for 2D, but since VMware doesn't have its own host-level graphics subsystem for 3D, it depends on nVidia to gouge their customers. Whereas in Hyper-V it's free and works on $100 graphics cards.
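In code, the guest side of that arrangement is little more than a stub that serializes the API call and waits for a handle. Everything below is invented for illustration; it shows the idea, not Hyper-V's actual wire format:

```c
#include <stdint.h>

/* API-level GPU paravirtualization: the guest-side "driver" serializes
 * the call and ships it across the paravirtual channel; the host runs
 * it on the real driver and returns a handle. All names hypothetical. */
enum gfx_op { GFX_CREATE_CONTEXT = 1, GFX_DRAW = 2 };

struct gfx_msg {
    uint32_t op;         /* which API call is being forwarded */
    uint32_t arg_bytes;  /* payload length                    */
    uint8_t  args[240];  /* serialized call arguments         */
};

static void pv_channel_send(const struct gfx_msg *m) { (void)m; } /* stub */
static uint32_t pv_channel_recv_handle(void) { return 1; }        /* stub */

/* Guest-side stand-in for "give me an OpenGL context". */
static uint32_t guest_create_context(void)
{
    struct gfx_msg m = { .op = GFX_CREATE_CONTEXT, .arg_bytes = 0 };
    pv_channel_send(&m);              /* crosses the guest/host boundary  */
    return pv_channel_recv_handle();  /* host hands back a context handle */
}

int main(void) { return guest_create_context() ? 0 : 1; }
```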
Storage is HUGE... I could go on for pages about the benefits of paravirtualization for storage.
So here's the thing. I assume that KVM will get full support for all the base features of paravirtualization. The design is simpler and better than making virtual PCI devices for everything. It's also just plain cleaner (look at Docker). In addition, I hope that they will manage to integrate the Hyper-V GPU bridge APIs by linking Wine to the paravirtualized driver there.
In truth... if you look at paravirtualization... it's exactly the same thing that VirtIO does with Linux.
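For reference, here's the descriptor at the heart of a VirtIO split virtqueue: the same shared-ring idea as above, standardized. The layout matches the virtio spec's struct vring_desc (plain stdint types used here instead of the spec's little-endian virtio types):

```c
#include <stdint.h>

/* One scatter/gather descriptor from a VirtIO split virtqueue. The guest
 * fills these in; the host device consumes them over shared memory. */
#define VRING_DESC_F_NEXT  1  /* buffer continues in desc[next]    */
#define VRING_DESC_F_WRITE 2  /* device (host) writes, guest reads */

struct vring_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* buffer length in bytes               */
    uint16_t flags;  /* VRING_DESC_F_*                       */
    uint16_t next;   /* chain to next descriptor if F_NEXT   */
};
```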
VMware has a little paravirtualization, but to make it work completely, they would probably need to stop making their own kernel and instead go back to Linux or Windows as a base OS. They simply lack the developer community to do full paravirtualization.
And BTW... paravirtualization is the exact opposite of pervy and wrong. What you do to avoid paravirtualization is precisely the pervy and wrong thing... but if that's what pervy and kinky is... I'm in. I love that kind of stuff. Paravirtualization is WAY BETTER, but legacy virtualization is REALLY FUN if you happen to be an OS-level developer.
I have been using VMware where Windows will not run the VMs, and that is painful. Not sure, but the whole VMware stack works well and is simple. Not perfect, but hands down better than anyone else. Amazon, eBay, Google and providers like them can do whatever they want, as they have a whole army of IT staff and developers to make any OS or application just work. The rest of us use VMware. The SMB market? VMware with the free hypervisor is just too simple compared to the Hyper-V product.
On the desktop, I'm with you. VMware's desktop versions work really well, with no hassles and no pain.
For SMB, to be honest, I've set up VMware for years. I dabbled with Hyper-V here and there, and until Windows Server 2016 I wasn't really happy. But with Hyper-V 2016, it is actually much easier now. Not only that, but on a single host, or 2 or 3 hosts, it's way easier than VMware today. Install Hyper-V (free of charge... no cost... period), set up basic Windows networking, set up a Windows share for storing virtual machines, set up a Windows share for storing ISOs. You're basically done.
Of course, if you want the good stuff (vCenter-style), you can save a lot of money by avoiding SCVMM and buying either ProHVM, for less than an hour of your salary per host, or 5Nine, which is REALLY REALLY AWESOME but which I stopped recommending or buying once they removed the prices and purchase links from their website in favor of forcing me to talk to a salesperson.
I will say that VMware is compatible... but it's not easy. Truly... I use KVM in many environments and I use Hyper-V as well. I still work very often with VMware, and it's always funny how many problems it has which the others don't. But of course, it does run pretty much everything, and I REALLY LIKE THAT. So, if I'm playing with old operating systems on my laptop, VMware is the only way to go. If I have to get some work done, then Hyper-V or Ubuntu are the only options.
I recommend you check either of them out again. I think you'll find that if you invest one full day of your life in learning either one, you'll never be able to look at VMware again without laughing at how much of a relic it is.
Nice stuff from the KVM team!
(I just wish that the vSwitch would get some love. Virtualization is useless for high-throughput network loads (audio/video transcoding, or other real-time media flows), and quite honestly all the existing workarounds for it suck so badly that I run screaming back to non-virtual hardware. It's 2017 and the best we have is SR-IOV+DPDK, so long as you don't mind losing all your security groups?)
KVM supports many more versions of guest OS than HyperV ... look at the supported Linux versions in HyperV ... ridiculous. Besides, I cannot believe it scales better ... or do you mean I can hook up some USB scales to it, and after 5 minutes watching Windows "installing generic USB device drivers" and a reboot of the VM, the USB scales work better on Windows? Could not even believe that ...
This time the joke icon gets some love, though the list of supported guest systems in HyperV is a sad joke ...
"KVM supports many more versions of guest OS than HyperV ... look at the supported Linux versions in HyperV "
Because no one sane uses anything other than a mainstream / enterprise Linux version in the enterprise....
Hyper-V supports all of those.
". Besides, I cannot believe it scales better "
Why not - it's got far more advanced functionality - and it's a proper standalone hypervisor - not relying on a complete underlying OS like KVM. And anyway there are numerous benchmarks that show that it does.
VMS Software Inc. has announced the release of OpenVMS 9.2, the first production-supported release for commercial off-the-shelf x86 hardware.
The expectation is that customers will deploy the new OS [PDF] into VMs. Most recent hypervisors are supported, including VMware (Workstation 15+, Fusion 11+ and ESXi 6.7+), KVM (tested on CentOS 7.9, openSUSE Leap 15.3, and Ubuntu 18.04), and Oracle VirtualBox 6.1.
For now, there is a single supported [PDF] model of server for bare-metal hardware deployments: HPE's DL380.
The Xen Project has delivered an upgrade to its hypervisor.
Version 4.16 was announced yesterday by developer and maintainer Ian Jackson, capping a nine-month effort that saw four release candidates emerge in November 2021 prior to launch.
The project's feature list for the release celebrates the following additions as the most notable inclusions:
Microsoft has patched the patch that broke chunks of Windows and emitted fixes for a Patch Tuesday cock-up that left servers rebooting and VPNs disconnected.
There was a time when out-of-band updates from Microsoft were considered a rarity. Not so much these days. On the receiving end of the company's attention were Windows desktop and Windows Server installs left a little broken following Microsoft's latest demonstration of its legendary quality control.
KB5010793, KB5010792, KB5010790 and KB5010789 were slung out for Windows 10 and Windows Server. Even Windows 7 and Windows Server 2008 R2 got some love with KB5010798 and KB5010799, such was the blast radius of last week's whoopsie.
Updated Microsoft's first Patch Tuesday of 2022 has, for some folk, broken Hyper-V and sent domain controllers into boot loops.
A Register reader got in touch concerning KB5009624, which they said "breaks hypervisors running on WS2012R2."
"I'm currently dealing with this right now and it's a hassle," our reader said.
Security vendor Bitdefender has open-sourced its hypervisor introspection technology, which the Xen Project will adopt as a sub-project.
Hypervisor introspection (HVI) makes it possible to inspect the memory of a guest VM, a desirable thing to do if you are hunting for malware infections in the guest.
Xen and Bitdefender have collaborated around this sort of thing since at least 2015, when the open-source hypervisor added a feature, libbdvmi, that Bitdefender helped to develop. Citrix and Bitdefender later commercialised the technology in Citrix's version of Xen.
The Xen Project has ported its hypervisor to the 64-bit Raspberry Pi 4.
The idea to do an official port bubbled up from the Xen community and then reached the desk of George Dunlap, chairman of the Xen Project’s Advisory Board. Dunlap mentioned the idea to an acquaintance who works at the Raspberry Pi Foundation, and was told that around 40 percent of Pis are sold to business users rather than hobbyists.
With more than 30 million Arm-based Pis sold as of December 2019, and sales running at a brisk 600,000-plus a month in April 2020, according to Pi guy Eben Upton, Dunlap saw an opportunity to continue Xen’s drive towards embedded and industrial applications.