Benchmark a cloud PC? No way. Just trust us, they work, says Microsoft

Microsoft has offered guidance for those trying to benchmark cloud PCs: don't bother. A Tuesday post by Ron Martinsen, senior product manager for Windows 365 Cloud PC and Azure Virtual Desktops, opens with an account of his 28+ years of experience at Microsoft. During that time, he reminisces, he was "involved in countless …

  1. Mark 65

    That they are espousing server performance whilst hand-waving at VDI performance tells me they are greatly over-provisioning on the desktop front.

    1. MatthewSt

      Which is a shame really, because if they were letting you choose the overprovisioning it would be better all round! I don't need 2 CPUs per user, I can probably get away with 0.5 per user in general, as long as they can use the others when needed. Let me share 16 CPUs across 32 users!

      1. quxinot

        They'd be happy to do that.

        Of course, you'll need to pay a per-user cost. And probably a per-CPU cost in addition. And an overprovisioning cost.

  2. Screepy

    Benchmarks are important sometimes..

    We spent 2022 moving a big SQL data warehouse into Azure and onto some of their beefier VM options.

    For the next 6 months we struggled with performance. Data cube analysis that would take us 5-6 hours on our previous on-prem servers was now taking double that.

    MS had helped us spec the cloudy VMs - we provided our on-prem specs and they provisioned us with the VMs that would supposedly outperform our older system.

    Not even close. Our SQL DBAs spent days/weeks trying to tune the system but we just couldn't get anywhere near our old system. We even bumped the cloud VMs up a couple more performance tiers (which completely wiped out the planned budget for the system) but still had issues.

    Back and forth we went with MS support until they eventually said that the setup was running as optimally as it ever would.

    So, I had to rebuild the system on-prem again and we migrated back (what fun that was).

    During the switch back we had a couple of days downtime.

    DBAs and I took the opportunity to absolutely smoke both systems with some benchmarking tools. The on-prem kit was so much better it was a joke. And we're not talking high-end kit here. Midrange hybrid Nimbles as storage layer, with good, but not amazing Aruba switches and a well oiled but nothing special VMware layer running the VMs.

    The main weakness that we spotted on the benchmarking for the cloud stuff was disk IO, it just couldn't get anywhere close to the on-prem Nimbles.
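    For anyone wanting to repeat this sort of comparison, here's a crude sketch of a disk test. It's a hypothetical illustration, not the tooling Screepy's DBAs used (fio or diskspd would be the serious choice), and the fsync only roughly defeats OS caching:

```python
import os
import tempfile
import time

def disk_benchmark(dir_path=None, file_mb=64, block_kb=64):
    """Rough sequential-write throughput test in MB/s.

    Indicative only: real benchmarking tools control caching,
    queue depth and I/O patterns far more carefully.
    """
    block = os.urandom(block_kb * 1024)          # one write-sized chunk
    blocks = (file_mb * 1024) // block_kb        # chunks needed for file_mb
    fd, path = tempfile.mkstemp(dir=dir_path)
    try:
        start = time.perf_counter()
        for _ in range(blocks):
            os.write(fd, block)
        os.fsync(fd)  # push data to the device, not just the page cache
        elapsed = time.perf_counter() - start
        return file_mb / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

print(f"sequential write: {disk_benchmark(file_mb=8):.1f} MB/s")
```

    Run on both environments with the same parameters, the relative numbers are usually telling even when the absolute figures are rough.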

    1. Korev Silver badge

      Re: Benchmarks are important sometimes..

      A friend had a business that self-built some consumer-level PCs for running their tests. They got taken over by a company that had standardised on Azure; to match the speed of the PCs they'd built for a three figure sum, their new corporate overlords apparently had to spend a five figure sum in Azure!

    2. Anonymous Coward
      Anonymous Coward

      Re: Benchmarks are important sometimes..

      Cloud is such a con that it's depressing that CEOs, CIOs etc fall for it. The big consultancies have a BIG case to answer for it, and they'll double their cash when you start moving load back out of the cloud.

      The only use cases for Cloud are small firms with no in-house IT talent, or firms where load is constantly moving up and down. If you have a stable and predictable growth/fall rate, there is no way Cloud is for you. It's going to be more expensive, slower, higher risk AND just more of a pain in the arse to deal with. Infrastructure should NOT be Agile, and the management tools definitely should NOT be changing almost weekly in their look and feel. Software-driven infrastructure brings the madness you see from developers who want shiny new tools every 5 minutes, but if you're running infrastructure you want consistency, so that when things go pear-shaped you're not spending half your time trying to find where all the buttons have moved because a developer somewhere watched a Marvel movie and decided it would be cool to make the interface look like something they saw there.

      Stick your stuff in a grown-up data centre. Get some GOOD IT guys and you'll be spending half what your cloud costs are, with a MUCH more stable infrastructure and no danger of a dev halfway across the world leaving an S3 bucket open to the internet because it's more convenient to their dev cycle to have ANY ANY EVERYONE access to it.

      Not anonymous because my hate for the consultancies is well known

      1. Korev Silver badge

        Re: Benchmarks are important sometimes..

        I mostly agree, but not so much with "The only use cases for Cloud are small firms with no in house IT Talent"

        Running stuff in the Cloud still requires skills, albeit different ones. You obviously don't need the hardware people, but you still need people who know how to spin up VMs etc. The risks of getting things wrong are debatably higher too, as the number of "hacks" on S3 buckets shows.

        1. ChoHag Silver badge

          Re: Benchmarks are important sometimes..

          The last all-Amazon place I was at didn't even know they had VMs.

          It's all serverless, right?

        2. doublelayer Silver badge

          Re: Benchmarks are important sometimes..

          "The risks of getting things wrong are debatably higher too, as the number of "hacks" on S3 buckets shows."

          This is more just standard failure to do the obvious security, and in AWS's case, they've changed defaults to try to help with that problem. If someone doesn't check that their buckets aren't publicly accessible, they may not be checking whether their internal files on the public server can be accessed without authentication. If you intend to shoot yourself in the foot by not checking any configs, you can do that pretty easily on cloud or off it. Competence to at least check the basic things is unfortunately not universal, and if someone thinks that switching to cloud or avoiding cloud will help with that, they're looking in the wrong direction.

          1. Ex-PFY

            Re: Benchmarks are important sometimes..

            You saw MS release a doozy last week, BingBang? Failing to correctly configure their multi-tenant apps due to a single checkbox. The interface could be to blame, or the MS employee, or the sense-defying mind-twister that is relying on human memory and obscure documentation to get variable configurations correct every time. The hilarity was the same-day release of cloud security advice for non-MS clouds...


          2. Anonymous Coward
            Anonymous Coward

            Re: Benchmarks are important sometimes..

            "If you intend to shoot yourself in the foot by not checking any configs, you can do that pretty easily on cloud or off it. "

            False. On-premises you have network people (not developers) to take care of external security. In a cloud there's no such thing and that makes a world of difference.

            1. doublelayer Silver badge

              Re: Benchmarks are important sometimes..

              This is exactly the flawed assumption I was talking about. Your comment assumes two things:

              1. "On-premises you have network people [...] to take care of external security."

              2. "In a cloud there's no such thing [as network administrators]"

              Both are dependent on the administration of the network. There are on prem deployments that are missing administrators to handle situations or where the administrators are not competent to handle security issues. Just having the servers in a building you own doesn't make those admins pop into existence, nor does it automatically train those admins in making sure the systems are on secure networks rather than just being available. There are admins who are good at plugging in network cables and assigning static IPs who don't understand which things need to have public access and which don't, and when those people decide on the firewall rules they often leave them more open than they need to because nobody is complaining when that is configured. If you choose to deploy those servers in the cloud, it doesn't auto-fire those administrators, nor does it force you to have a policy that prevents them from applying secure standards.

              Your comment contains all the important details and somehow still misses the point. Wherever you put your resources, you should have someone who understands how to deploy them securely and is empowered to make sure it has happened. Leave them out and your on prem situation will not save you. Include them and your cloud situation can be secure. This is the same logic that was used by cloud salespeople who said that, because they were big companies with a lot of security people, your deployments would be safe if you just moved them to their resources. That sales pitch was wrong, and so is its exact opposite, where on prem deployments are automatically more secure. Nothing is automatically secure.

    3. Anonymous Coward
      Anonymous Coward

      Re: Benchmarks are important sometimes..

      I'm having to constantly push back at our MSP that we don't want to move everything fully to Azure. It's still cheaper on-prem. Getting sick of the pushing.

    4. A Non e-mouse Silver badge

      Re: Benchmarks are important sometimes..

      I've got a small niche piece of software that needs a full-fat MS-SQL server. I took a look at running the SQL in Azure as I didn't want to have to manage a MS-SQL server. I think the cost break-even of on-prem Vs Azure SQL was 13 months. No brainer.

    5. Anonymous Coward
      Anonymous Coward

      Re: Benchmarks are important sometimes..

      I've seen the same moving to a fully virtualised environment, especially with virtualised "general purpose" storage (being what the customer had wanted to pay for) causing variable read times for different blocks of data. You pay for convenience and sometimes it's worth the trade off.

  3. ChoHag Silver badge

    > "Microsoft's goal is to ensure that all Cloud PCs meet expected service level agreements and provide a consistent user experience across different configurations," he wrote.

    Microsoft's user experience has been consistent since their very first days across all of their devices. I have unshakable faith in their ability to carry on unchanged while their pundits continue to bury their noses in the trough.

  4. trevorde Silver badge

    Fast as the slowest link

    Having to use a VM for dev in my current job. The justification was that the VM was located next to the data lake (data swamp), so data access would be a lot faster. What they didn't say was that the 2 cores of an 8 year old (yes, 8 *years old*) CPU allocated to my VM would struggle to run, well, virtually everything. It is literally half the machine I had in my previous job.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fast as the slowest link

      'data lake' is a bad word: typically it means the cheapest possible hard drive subsystem at the end of a single 1000 Mbps cat5e cable, because that's the cheapest way to do it and "customers will never know."

      Then you get >30ms latency for *anything*, and that murders the performance of any OLTP database. Even if the 'drives' themselves were SSDs.

      That's why cloud is OK only for low-performance applications, or it needs totally overblown hardware to do anything fast.

      It is good if you have a stand-alone web server and need to add 6000 of them fast. But for permanent data storage it's really bad: even if you can add 6000 database servers in a minute, it won't speed up the disk subsystem one iota, and all of those new servers just cost money while waiting for the disks and idling.
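      The latency point deserves a back-of-the-envelope illustration. When a transaction's I/Os are dependent (serial), per-I/O latency caps throughput no matter how many servers you add. The figures below are illustrative assumptions, not measurements:

```python
def max_tps(serial_ios_per_txn: int, latency_ms: float) -> float:
    """Upper bound on transactions/sec when each transaction must
    wait for its I/Os one after another (serial dependency)."""
    return 1000.0 / (serial_ios_per_txn * latency_ms)

# Assume 10 dependent I/Os per transaction; compare per-I/O latencies
# for local NVMe, a decent SAN, and a 30ms 'data lake'.
for latency in (0.1, 1.0, 30.0):
    print(f"{latency:>5.1f} ms/IO -> {max_tps(10, latency):8.1f} txn/s max")
```

      At 30ms per I/O the ceiling is roughly 3 transactions per second per serial stream, which is why no amount of extra database servers helps.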

  5. Potemkine! Silver badge

    "when you run the same test repeatedly under identical conditions, you get results that might not be an exact match each time."

    This is a weak argument unless the results vary by something like 100%, which I doubt. Not getting _exactly_ the same result is not a problem if the variation stays within a limited boundary, say +/- 5%.

    What is important is to have an order of magnitude with which to compare the systems. Claiming that is not interesting is dubious, if not just pure marketing BS.

    1. Korev Silver badge

      If the cloud VDI VMs were significantly faster than a "traditional" setup then I'm pretty sure there'd be Apple-esque hubris all over the place

    2. doublelayer Silver badge

      Exactly, or even a variation with a predictable floor. For example, if you're sharing desktops on a big server, then you could theorize variation where you get more of the power of that server if your desktop happens to be the only one or one of few operating at a certain point. This would be a nice bonus during low-utilization times, and it would mean that benchmarks taken then wouldn't be reflected when a more typical load was using the resources. However, you'd still need some agreed lowest level of performance so you know what you're paying for. Microsoft's offering doesn't appear to have that.

      I'm also still not sure what the point of the desktops is when you're running them from a machine that's likely as powerful. Using cloud for more powerful servers, or for ones that need better connections to a lot of data, makes sense. Using it for user desktops isn't efficient, since you'll still be providing hardware to the users that could be doing the job. I don't think there will be much saving in downgrading that hardware to a thin client.

  6. Pascal Monett Silver badge

    "dependent on the startup order of applications"

    Well, if you had bothered making an OS that actually starts applications up in the same order every time, then this point would be moot, wouldn't it ?

    But no, Windows does whatever the hell it wants every time, so obviously now you have a good excuse.

  7. Caver_Dave Silver badge

    "when you run the same test repeatedly under identical conditions, you get results that might not be an exact match each time."

    Try a deterministic real-time operating system.

    You get identical results every time!

    I can get deterministic results from Windows and Linux running (concurrently) over a deterministic real-time hypervisor.

    And if you think that I'm not talking about everyday applications, just look at all the 5G OpenRAN implementations around the world running on commodity hardware and an RTOS.

    It is "horses for courses", and that statement, by someone who should know better, should have been caveated.

  8. Baudwalk

    Azure's paper specs...

    ...are kinda useless.

    I rented a VPS on Azure for a couple of months for running a Minecraft server for my son, as I already knew how Azure admin worked. The price was OK, but I spun it up on demand and had it auto shutdown each night.

    Then I noticed Hetzner weren't as expensive as I had assumed, and I could get a VPS with the same specs as the Azure one.

    While the servers had the same specs, the Hetzner server has consistently been much, MUCH more responsive than the Azure one was even at the best of times.

    Generating new areas on the Azure Minecraft server stalled regularly. On Hetzner it's always buttery smooth.

    And while my total monthly cost is the same, the Hetzner server is kept running 24/7. That many hours of up-time on Azure would have been horribly expensive.

  9. Kiss

    Missing the point

    The purpose of Azure cloud PCs is to destroy Citrix, VMware and the other virtual desktop offerings by undercutting those products, so that their owners cannot keep investing and leave the market for M$ to prey upon.

    The comments are completely aligned with this strategy. M$ are simply planting doubt in companies' minds that the better-performing products from competitors mean anything. Wake up and smell the lily of death of innovation in virtual desktops.

  10. Anonymous Coward
    Anonymous Coward

    "when you run the same test repeatedly under identical conditions, you get results that might not be an exact match each time."

    Irrelevant, as you can still use statistical analysis on the results. And it's not run 5 times, it's run 100 or 1000 times. This guy obviously has no idea how performance is measured.
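    For the curious, a minimal sketch of the statistical treatment meant here, with synthetic timings standing in for real benchmark runs:

```python
import random
import statistics

random.seed(42)

def run_benchmark() -> float:
    # Stand-in for one timed run of a real workload: ~100ms with
    # ~5% Gaussian noise, mimicking run-to-run variation.
    return random.gauss(mu=100.0, sigma=5.0)

samples = [run_benchmark() for _ in range(1000)]
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
# Approximate 95% confidence interval on the mean for a large sample
ci = 1.96 * stdev / (len(samples) ** 0.5)
print(f"{mean:.1f} ms +/- {ci:.1f} (stdev {stdev:.1f})")
```

    With 1000 runs the interval on the mean is a fraction of a millisecond, so two systems can be compared confidently even when individual runs are noisy.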

  11. ecofeco Silver badge

    This is supposed to inspire confidence?

    "it's nearly impossible to get repeatable data in an environment that reflects the reality of what users will be using."

    So it's not reliable. Got it.

  12. Henry Wertz 1 Gold badge


    Sure, trust us...

    So, everything they say is true, to some extent...

    1) You won't get the same results from one run to the next (probably, since there will be other users on the system.)

    2) They'll upgrade hardware from time to time.

    But... obviously (as others have pointed out) you can run benchmarks anyway and get a general idea of performance.

    From running Linux and Windows VMs myself (locally), Windows straight-up generates an insane amount of disk I/O, like 10-100x the amount a Linux VM would; my suspicion is you get access to the amount of CPU power you are supposed to but the storage system is being absolutely hammered and so the disk latency is high.

    I note here that Amazon has setups where the SLA allows for up to a *15 second* latency on storage access! Not that it's usually anywhere near that high, but getting, say, a 14.9 second latency on your disk access would, needless to say, result in poor performance. Amazon would be able to say "Welp, it's within SLA, so it's working as intended."
