
Maybe a dumb question... If you're so concerned about performance that you need two thousand CPUs and a couple of hundred TB of RAM, wouldn't you be running on bare metal anyway? Why is this necessary?
Microsoft has announced new scalability ceilings for its Hyper-V hypervisor. "Please read carefully. These are not typos," wrote Jeff Woolsey, Microsoft's principal program manager for Azure Stack HCI, Windows Server and hybrid cloud. That warning came in the first of a series of Xeets in which Woolsey revealed that VMs under …
That depends on your scenario. Out of 2048 CPUs you can spare one, four or sixteen for coordination and preparing the threads, and you'd still have well over 99% scaling efficiency. The OS itself has no problem, thanks to some famous scheduler optimizations that arrived with Windows 7 and have been improved in every subsequent Windows version (hey Windows kernel coders, you do an amazing job! Lend some of your powers to the UI and Teams guys!).
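If you want the napkin math, here's a rough sketch in Python (purely illustrative, not a benchmark) of how little of a 2048-vCPU VM those coordination threads actually eat:

```python
# Share of a 2048-vCPU VM left for worker threads when a few vCPUs
# are reserved for coordination/thread setup. Illustrative arithmetic only.
TOTAL_VCPUS = 2048

for reserved in (1, 4, 16):
    workers = TOTAL_VCPUS - reserved
    print(f"{reserved:>2} reserved -> {workers} workers "
          f"({workers / TOTAL_VCPUS:.2%} of the VM)")
```

That prints 99.95%, 99.80% and 99.22% respectively, so even the worst case barely dents the box.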
I always post as AC because I have a clause in my contract that states I cannot discuss my employer or clients. Even if I don't mention names, it may be possible to infer something from one of my posts. Rather than trying to decide which posts need to be anonymous or not, I just find it simpler to post everything anonymously.
Are you sure? This page implies that the OS is virtualised (albeit with direct hardware access).
"In addition, if you have Hyper-V enabled, those latency-sensitive, high-precision applications may also have issues running in the host. This is because with virtualization enabled, the host OS also runs on top of the Hyper-V virtualization layer, just as guest operating systems do. However, unlike guests, the host OS is special in that it has direct access to all the hardware, which means that applications with special hardware requirements can still run without issues in the host OS."
A fair question, but the answer is 'portability'. The only glimpse the guest OS has of the real hardware is the CPU make & model. The rest is all an abstraction, so at any point in the future you can shift to a new/different system (even with wildly different hardware) and the VM will be blissfully unaware. You don't need to hope and pray that 1) the OS doesn't complain about X and Y changing, or 2) the admin updating config items doesn't miss one.
Because at some point in the future, this will likely be considered small. It was within living memory that 1MB of RAM was considered massive, or a 10MB HDD was considered to be like having a McMansion of storage.
So, instead of making small tweaks every couple of years, they just did about a decade's worth in one go.
I think you missed the point. I am well aware that Azure runs on a customised version of Hyper-V. My point was that this news article is about Hyper-V capacities within Server 2025. If you were running a monster VM like that on Azure you would be running it directly on Azure (which has been able to handle monster VMs for some time), not using nested virtualisation within Server 2025.
This is Microsoft's way of trying to remind people they still make a hypervisor, the same way they beg you not to download Chrome when it's the only thing you use IE/Edge for. Most well-heeled Microsoft-y admins pay VMware today for *good* virtualization, but with Broadcom pissing on everyone now, they need another easy button, since they're usually allergic to Linux. Anyone that isn't afraid of Linux will just use KVM or KVM-based solutions.
Not any more. They switched to core-count licensing a year or two ago. I can still remember our sales guys pushing our customers to buy before Microsoft "optimizes" their licensing again. And before that there was an option to have it cheaper with dual 8-core CPUs. And before that your Server 2016 scheme applied. And before that <error stack exceeded>.
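If anyone wants to see why that "optimization" stings on modern hardware, here's a rough Python sketch of the post-2016 per-core model. The 8-licenses-per-socket and 16-per-server minimums are my assumption of the usual rules, so check the current Microsoft licensing terms rather than trusting this:

```python
# Rough sketch of Windows Server per-core licensing math (post-2016 model).
# Assumed minimums: 8 core licenses per processor, 16 per server.
def core_licenses_needed(sockets: int, cores_per_socket: int) -> int:
    per_socket = max(cores_per_socket, 8)   # assumed per-processor minimum
    return max(sockets * per_socket, 16)    # assumed per-server minimum

# The old "sweet spot" dual 8-core box vs. a modern high-core-count server:
print(core_licenses_needed(2, 8))    # 16 core licenses
print(core_licenses_needed(2, 64))   # 128 core licenses
```

Same OS, eight times the licenses, which is exactly why the sales guys were in a hurry.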