If/when AMD produces an ARM PC CPU, it would be easy for them to include an x86 core for legacy programs. So then no need for x86 emulation, I think.
BTW: Windows on ARM ... I've got it running ... on a Raspi 4. Installing was easy.
Intel's stock dipped slightly on Monday after a report that Nvidia was developing an Arm-based CPU for the PC market. Citing unnamed sources, Reuters reports that Nvidia has quietly begun designing CPUs based on an Arm architecture specifically to run Windows. The development is apparently part of a broader effort by Microsoft …
I wonder why it isn't popular, then, if it works and runs fine. Is it just the OS that runs fine, while running any applications requires lots more RAM than is typical of an ARM solution today?
Or is there some other reason why power-sipping ARM boards aren't running Windows all over the place?
I remember DEC doing that in the '80s; I think the VAX 8650 from nearly 40 years ago was their last CISC processor, subsequent VAX CPUs all used microcode, and of course PRISM and Alpha were pure RISC from the outset. Intel's "settlement" with DEC is possibly where they acquired the technology.
All these companies wanting to compete for the basically nonexistent 'Windows PCs running on ARM' market?
I think they see Qualcomm investing in it and they figure "we better jump in in case the market develops" because it hurts your stock price less to blow a few billion creating products no one wants to buy than to be seen as "behind" by Wall Street analysts.
Windows 11 allows x86 and x64 apps to run under Windows on Arm.
I've not tried it yet myself, but that apparently works OK running Windows on Arm under Parallels on macOS.
Agreed. A lot depends on Microsoft adding in something akin to Rosetta 2, if the "I don't care if it's ARM or Intel" aspect is going to extend to a large fraction of the Intel Windows market.
They're reasonably well placed for this; in Windows 11, applications run in mini VMs hosted on top of Hyper-V. It wouldn't be too hard to add a translation layer at that point, and it's then not too much of an extension to store the "translated" VM, much as Rosetta 2 stores the Intel binary pre-translated to ARM opcodes. A lot of the tech for this is just lying around the place as OSS (QEMU, for one), though I suspect Microsoft would prefer to write their own. The implication of what's there at present in Windows 10/11 is that the translation is on the fly; it shouldn't be too hard to store it. There are potentially some licensing issues: some EULAs prohibit "translation" of software into another form, which is exactly what Rosetta 2 or any other emulation is doing. Apple appear not to be getting sued, so perhaps MS would get away with it.
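To make the on-the-fly versus stored distinction concrete, here's a minimal sketch of a translation cache in C. Everything in it is invented for illustration and reflects nothing of Microsoft's or Apple's actual internals:

    /* Minimal sketch of JIT-vs-cached binary translation. The types
       and function names are invented for illustration only. */
    #include <stdio.h>

    #define CACHE_SLOTS 256

    typedef struct {
        unsigned long guest_addr;   /* address of an x86 code block */
        char host_code[64];         /* "translated" ARM code        */
        int valid;
    } cache_entry;

    static cache_entry cache[CACHE_SLOTS];

    /* Stand-in for the expensive step: translating one block. */
    static void translate_block(unsigned long guest_addr, char *out)
    {
        snprintf(out, 64, "arm-code-for-%lx", guest_addr);
    }

    /* A pure JIT calls translate_block() every time. Caching, as
       below, translates on the first miss and reuses the result; a
       Rosetta-2-style design additionally persists the cache to
       disk, so later runs skip translation entirely. */
    static const char *lookup(unsigned long guest_addr)
    {
        cache_entry *e = &cache[guest_addr % CACHE_SLOTS];
        if (!e->valid || e->guest_addr != guest_addr) {
            translate_block(guest_addr, e->host_code); /* slow path */
            e->guest_addr = guest_addr;
            e->valid = 1;
        }
        return e->host_code;                           /* fast path */
    }

    int main(void)
    {
        printf("%s\n", lookup(0x401000)); /* translated on first use */
        printf("%s\n", lookup(0x401000)); /* served from the cache   */
        return 0;
    }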
I think Microsoft may eventually do this, if it becomes clear that ARM is the future. They've got a pretty good record for supporting software long term. Good support for Intel binaries on ARM would simply be an extension of this.
It was a terrible shame that Windows Mobile didn't gain enough ground to survive. By the time they'd finished it, it was technically pretty good; the software ecosystem was the problem. Better still, MS had defined a hardware standard for it, so anyone (for some measure of "anyone") could make a phone and trivially get Windows Mobile running on it. Much like the desktop PC is a hardware standard that originated with IBM and was molded by Microsoft later in the shape of the PC System Design Guides (Wikipedia). The implication was that, if the hardware was standardised to boot Windows Mobile, it was also standardised to potentially boot something else altogether (like an Ubuntu distribution).
That openness of hardware was a vast improvement on the closed proprietary designs that dominate today, and it's a real pity that it died along with Windows Mobile.
I know they've had an x86/64->ARM translation layer for some time, but the implication is that it doesn't store the translated result long term, so far as I can tell, unlike Rosetta 2.
MS's own docs say that it works, but you're better off rebuilding for ARM to get native performance. That's what makes me think the translation is done just in time. A stored x86/64->ARM translation should perform more or less as if it had been rebuilt for ARM and run at native performance. It is, after all, effectively a rebuild using Intel opcodes as source code...
It is indeed a rebuild using Intel opcodes as source code, but that's a shitty place to start from. Speaking as someone who once upon a time ported assembly-language programs from x86 to 68000, I can tell you that you really are working with the minimum of information at this level - and I had comments! Sure, there are idioms in one ISA that have better equivalents in another, and those can be swapped mechanically, but most of the time the best way to "port" a function is to completely rewrite it to take advantage of the new chip's instruction set.
Without access to the source code, or at least the intermediate compilation product ("bitcode"), it's really difficult for any translator to produce optimal code: it has almost no information about the larger-scale intent of the code it's examining, because by this point lots of optimisations will have been made that obscure that intent - compilers are not designed to produce machine code that's understandable, only small and fast. Yes, the result of mechanical translation will be functionally correct, but it won't be anywhere near as time and space efficient as taking the original source code and recompiling for the chosen architecture.
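A concrete illustration of what gets lost: a byte swap. This is a hand-written sketch; the assembly in the comments is illustrative, not actual compiler or translator output:

    #include <stdint.h>
    #include <stdio.h>

    uint32_t byteswap(uint32_t x)
    {
        /* Given this source, each compiler can pick the native idiom:
             x86-64:  bswap  eax       ; one instruction
             ARM64:   rev    w0, w0    ; one instruction
           A binary translator never sees this function. It sees
           whatever shift/or sequence (or bswap) the x86 compiler
           already chose, plus the condition-flag updates that x86
           shifts and ors perform implicitly. To stay correct it must
           either emulate those flags on ARM or prove nothing reads
           them - analysis the compiler got for free from source. */
        return ((x >> 24) & 0x000000FFu) |
               ((x >>  8) & 0x0000FF00u) |
               ((x <<  8) & 0x00FF0000u) |
               ((x << 24) & 0xFF000000u);
    }

    int main(void)
    {
        printf("%08x\n", byteswap(0x12345678u)); /* prints 78563412 */
        return 0;
    }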
Apple has some very clever people working for it, and this is its third go at doing this (68k->PPC, PPC->Intel, Intel->Arm), but the engineers who understand the technology and its limitations do not write the press-releases about it. (I’d have loved to see the guys I knew at Apple writing the public releases for their stuff, but it might have broken several profanity filters)
No, Microsoft shipped a buggy 32-bit only emulation with Windows 10. Only Windows 11 for ARM does x64 emulation.
Windows 11 ARM is running smoothly on my M1 Mac, and the proprietary x64 software does not know there isn't any Intel inside. Hardcore Windows users, of course, will not be convinced: they just love Intel.
Let's remember that for over a quarter of a century - since the Pentium Pro and Pentium II came out - all Intel "x86" CPUs (*) have themselves consisted essentially of an on-the-fly translation layer wrapped around a non-x86, RISC-like (**) core architecture, with x86 instructions being converted to RISC microcode.
(*) I'll assume something similar applies to AMD.
(**) Some have disputed the characterisation of the core design as RISC, but regardless, the relevant part is that it *isn't* natively x86. (And anyway, since that's an internal implementation detail, they could have changed it completely one or several times since then, so long as the chip maintains x86 compatibility.)
Microcode is nothing new. I’m open to correction, but I believe the Motorola 68000, launched in 1979, was the first CPU to break the link between the native CPU instruction-set and the instruction words stored in memory. After that, you can run the chip internals whatever way you want.
But: CISC and RISC apply to the instructions stored in memory, not to how the core works. Your core could have a strict RISC-style load-store architecture, but the overall CPU is still CISC if you allow instructions like "ADD (store-to-memory-address),(fetch-from-memory-address)" as part of the instruction set. The performance limitation is due to those complex instructions requiring memory access at time of execution; it doesn't matter how you break it down.
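A sketch of that decomposition, using a memory-to-memory ADD like the one above. Purely illustrative C standing in for micro-ops; no real core cracks instructions exactly this way:

    /* Conceptual sketch of micro-op "cracking": how a CISC
       memory-to-memory instruction might execute on a load/store
       core. Illustrative only - not any real microarchitecture. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t mem_dst = 40, mem_src = 2;

        /* CISC view, one architectural instruction:
             ADD (dst),(src)        ; memory += memory           */

        /* What a load/store core actually does, as micro-ops:    */
        uint32_t t1 = mem_src;      /* uop 1: load  src           */
        uint32_t t2 = mem_dst;      /* uop 2: load  dst           */
        t2 = t2 + t1;               /* uop 3: ALU add (regs only) */
        mem_dst = t2;               /* uop 4: store dst           */

        /* The memory accesses still happen at execution time --
           cracking changes the bookkeeping, not the cost.        */
        printf("%u\n", mem_dst);    /* prints 42 */
        return 0;
    }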
I'm aware that microcode itself goes back many decades, long predating the Pentium Pro/II or even the 68000.
And- as the footnote acknowledged- the point wasn't really whether the core was actually RISC or not, it was that it definitely *wasn't* native x86.
The point I was making- in the context of what I was replying to- is that almost every bog-standard, vanilla "x86" PC has already been effectively converting the full x86 instruction set into a completely different one, on-the-fly, day-in, day-out, under our noses for over two decades now, albeit in a completely user-transparent manner.
My point was that the core microcode is shaped by the external ISA; it is not neutral.
Inside the CPU is the worst place to try to perform optimisation, and you will not get a core that was designed to perform well with the ARM ISA to run code compiled for x86 at any kind of acceptable performance, no matter how well you translate the instructions on the way in. x86 and ARM are about as opposed as you can get when it comes to the design of an instruction set, so it would be very hard to optimise for one without penalising the other.
If Intel wanted to implement the ARM ISA in a product (as they can: they hold the same class of ARM architecture licence as Apple does), doing so by just changing the microcode translator on one of their existing cores would not produce good results. Similarly, building an “x86” workalike by putting an x86 ISA translator in front of an ARM core would not be better than just using a real ARM and recompiling the code.
It does. It even makes a fairly decent stab at gaming through Steam. It's not going to run the latest Call of Duty or any recent AAA game well, but games that are a few years old (such as Just Cause 4) run fine. I was actually very impressed to run JC4 at a good frame rate (can't say for certain what it was, as I didn't have anything running to measure the frame rate, but it was smooth) at 1440p. Especially considering it was running an emulation of an x86-64 CPU, and the MacBook does not have a dedicated GPU or support external GPUs. And I was running it on battery.
Had to take it off though. It is a work laptop, and while I am an administrator on the machine, we do have software running on those machines that could have reported what I was running.
> the basically nonexistent 'Windows PCs running on ARM' market
There are a lot of different segments of the Windows market, but there are perhaps two that are significant to this particular argument.
There are casual users who are mainly using Windows for browsing and Office. They probably wouldn't notice if the processor architecture in their next PC were to change, except that it might be marginally cheaper. However, they probably wouldn't notice much more if Microsoft simply put a Windows-like skin on Android and shipped that instead: after all, the Windows UI seems to change gratuitously from version to version already. In either case, there are only a few significant applications involved and Microsoft owns them, so porting is (merely!) a technical issue.
Then there are users who depend on backward compatibility for software they already have. Here it's much more difficult: Windows on ARM would have to reproduce all the accumulated cruft of Windows' deprecated features and there's probably no feasible way of doing that short of emulation using the existing code base.
Apple's architecture changes have always had two advantages: the relatively small software base (as in the first case above) and a significant increase in hardware performance to (mostly) absorb the compatibility overhead (as you would get in the second case above).
I don't presently see there being a sufficient performance boost from a Windows ARM transition that emulation for legacy code would be painless, nor that it's worth the effort for casual users. And if the future is that Windows is simply a set of remote windows viewing a cloud desktop, then perhaps it doesn't have to exist as a consumer product at all, and Microsoft can have two separate operating systems for two separate architectures of cloud machines for as long as is necessary.
Apple's transition was easy because there's so little Mac-native software. A lot of desktop Mac software from big companies is Electron-based, so once you have a native JavaScript engine there's very little penalty. Add to that, the developers of the natively written Mac apps are also very responsive (there are a lot of one-man bands still working in Mac software) and are very good at adopting whatever the latest OS features are.
The other big plus for Apple is that its macOS platform has virtually no gamers, who are the most outspoken and demanding users of personal computers. That’s the biggest problem for Windows on ARM: even for ultraportables that will never be used for the purpose, the media outlets serving the Windows ecosystem insist on benchmarking laptop performance using suites of top games.
Against that, when you take the gamers out of the picture, most Windows users are very low demand - even less so than the Mac crowd (who have a disproportionate share of video users): most Windows laptops get used for web and office productivity, and for office work, emulation overhead doesn't matter. For web, once you have an ARM-native browser, you neuter the biggest problem, so your typical user will only see the battery-life improvements.
While I begrudge giving one outfit hegemony over the market, so far their actions as benevolent dictators, supporters of Linux and resisters of Xbox Game Pass, have largely been favourable.
Epic are reportedly losing money faster than they are making it, despite having a licence to print money in the form of Fortnite, so it's a risky knife edge to occupy.
The less said about the current state of EA and Origin, perhaps, the better...!
Or maybe Microsoft has approached them with a few million dollars to pay them to do it, because a whole shitload of developers around the world are growing up using Linux on ARM in robotics, and that has been leading to Linux on their Windows-based PCs. You do know that Microsoft knows exactly how many times you run WSL and which applications you are running, right?
My guess is that it's Microsoft funding this move by NVidia and AMD. Who knows, maybe it'll be originally only for gaming PCs with a new Windows OS and game engine optimized for ARM with NVidia and AMD GPUs. Either way, follow the money and it'll probably lead to Redmond.
> All these companies wanting to compete for the basically nonexistent 'Windows PCs running on ARM' market?
> I think they see Qualcomm investing in it and they figure "we better jump in in case the market develops"
The market hasn't developed because Qualcomm has an exclusivity deal for ARM devices officially supported by Windows. That deal expires in 2025 at which point teams Red and Green can sell their ARM based chips that also have decades of GPU know how baked in which addresses one of the big shortcomings of Qualcomm's SOCs. I would not be terribly surprised if the Windows on ARM market overtakes the Windows on x86 market by the end of the decade.
This would be a way to convince a lot of people, and especially companies, to upgrade their fleet of systems.
Think about it from the perspective of a Fortune 500 company that likely has a fleet of tens of thousands, maybe hundreds of thousands, of computers. If employees are working in one of your offices, you're footing the bill for all that electricity. If you can save even 5-10 cents per workstation per month on your utility bill, spread out over tens or hundreds of thousands of workstations, that's a not insignificant sum of money. They may never pay for themselves, but if you have to replace the workstations every few years anyway, you can certainly make them a lot cheaper on a TCO front if they're more power efficient.
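Putting rough numbers on that (the fleet size is a made-up example; the per-seat saving is the figure above):

    #include <stdio.h>

    int main(void)
    {
        double workstations  = 100000.0; /* hypothetical fleet size  */
        double saving_per_mo = 0.10;     /* upper end of $0.05-$0.10 */
        double months        = 12.0;

        /* 100,000 seats x $0.10 x 12 months = $120,000 a year */
        printf("Annual saving: $%.0f\n",
               workstations * saving_per_mo * months);
        return 0;
    }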
I'm fairly sure they didn't take off because Intel ruled supreme in terms of performance per $, and when they didn't they soon caught up and overtook again.
That's not really been the case for a while now, so perhaps there's a chance that another architecture could sneak in there and gain traction.
They'll have to hurry, though; Intel are themselves now using TSMC, so their chips will once again gain parity (at least on silicon process node). Much is made of ARM's power frugality, but that doesn't really impact desktop users, and so long as an Intel laptop can do a working day on a battery charge, there's little gained in lasting two. So there is a good chance Intel will become good enough / cheap enough to last.
But Intel are currently totally reliant (in the desktop/laptop world) on MS not making an effective equivalent of Rosetta 2 for Windows, because if that happens then for most people the Intel/ARM thing won't matter at all, and an ARM machine is likely cheaper.
With respect to power consumption, it may not mean much to desktop users, but if new environmental regulations come into force, then it will mean something.
Businesses' marketing divisions will get wind of the lower-consumption devices being implemented and will be able to spin that as "Hey, we really care about the environment", just as they do whenever they're forced into complying with new regulations.
Tegra was one of the processors used for Windows RT devices, including the Lenovo Yoga. Nvidia lost out to Snapdragon because they didn't include a cellular modem, critical for always-on operation. With 5G, it starts to make sense to decouple the chips.
Wouldn't it make more sense for Nvidia, who are historically poor at making CPUs (but great at making GPUs), to buy someone like Ampere? Their Altra ARM CPUs are seriously impressive in the datacentre, and Nvidia working with Ampere on a consumer device architecture could potentially be good and potent competition for the Apple M-series CPUs inside consumer devices - especially because Nvidia, Microsoft and others are already partnered with Ampere.
Windows and the Microsoft ecosystem will waste that great architecture too. It is almost a cultural thing for Wintel companies and developers. They will waste any amount of CPU and IO on trivial, lame things.
I have an i5 laptop which sometimes runs DRM video and suchlike under Windows 10, with almost nothing installed. I actually wait something like 10 minutes after boot for the system to be usable. There is also a massively irritating fan sound.
Once I reboot the same machine into openSUSE Tumbleweed (rolling + btrfs), the fan literally stops while it is still booting and stays that way, unless I do something like detect faces in 200,000 photos with all cores.
10 minutes is 10 times longer than vanilla Windows would take to settle down. I suspect you have a load of crud on that box so it isn't a fair comparison.
You could, however, fairly reply that MS don't make it easy to remove (or even identify) that kind of crud.
I would love to see something like the M1/M2 happen in the PC world, where you basically have 80%+ of the performance at about 20% of the power consumption. I don't care if it's ARM or RISC-V; the only real potential downside would be with gaming. Unless they include some PCIe interconnects to allow external GPUs, you're going to be stuck with whatever GPU came on the SoC until the end of time. It would almost be worth it to be able to get rid of the big bulky cooling systems on modern x86 CPUs and go either with completely passive cooling or maybe just a fan that can kick on as needed. Most apps could probably just be recompiled for ARM with few, if any, changes to the source code, assuming they stuck with official APIs, and Microsoft already has an x86 emulation layer for abandoned apps.
Again, games would be the only major sticking point. You can recompile the engine and game logic to ARM without too much trouble, unless there's a lot of SIMD-type code in there, but without support for external video cards, it'll be DOA for gamers.
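To make the SIMD point concrete: vendor intrinsics are ISA-specific, so that code has to be ported by hand or routed through a portable fallback. A minimal sketch (the add4 helper is made up for illustration):

    /* Why SIMD code doesn't "just recompile": vendor intrinsics are
       tied to the ISA. Sketch - add four ints at a time. */
    #include <stdint.h>
    #include <stdio.h>

    #if defined(__x86_64__) || defined(_M_X64)
    #include <emmintrin.h>               /* SSE2 */
    void add4(const int32_t *a, const int32_t *b, int32_t *out)
    {
        __m128i va = _mm_loadu_si128((const __m128i *)a);
        __m128i vb = _mm_loadu_si128((const __m128i *)b);
        _mm_storeu_si128((__m128i *)out, _mm_add_epi32(va, vb));
    }
    #elif defined(__aarch64__)
    #include <arm_neon.h>                /* NEON */
    void add4(const int32_t *a, const int32_t *b, int32_t *out)
    {
        vst1q_s32(out, vaddq_s32(vld1q_s32(a), vld1q_s32(b)));
    }
    #else
    /* Portable fallback: the only version that recompiles everywhere
       without porting work. */
    void add4(const int32_t *a, const int32_t *b, int32_t *out)
    {
        for (int i = 0; i < 4; i++) out[i] = a[i] + b[i];
    }
    #endif

    int main(void)
    {
        int32_t a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
        add4(a, b, out);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        return 0;
    }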
"Again, games would be the only major sticking point. You can recompile the engine and game logic to ARM without too much trouble, unless there's a lot of SMID type code in there, but without support for external video cards, it'll be DOA for gamers."
Almost certainly, but as gamers generally don't give a hoot about power use, they won't be drawn in anyway. From the AMD/Nvidia perspective, the global business PC market is (judging by a quick bit of searching) worth around 3.5x as much as the gaming market - why worry, when they've already got a big chunk of that value with their GPU offer?
Intel's tried to grow by moving into the GPU market, and whilst I wouldn't see it as a reaction, it's inevitable that AMD and Nvidia are looking for other growth options rather than just relying hopefully on AI and exascale computing.
Since AMD/Nvidia are now doing the SoCs in this scenario, does the external GPU really matter? This would be like AMD's APU (coupling CPU/GPU into one unit). The only downside is that you couldn't upgrade them independently, but that's a consumer facing issue so probably not a concern to either company.
What incentive would Microsoft have to support ARM-based PCs and laptops? They'd be throwing out decades' worth of backwards compatibility and may even open up a Windows of opportunity for Linux to crawl through.
Better battery life, you say? Microsoft doesn't care. Besides, the Windows RT debacle has made them headshy. It cost them billions and only resulted in angry customers saddled with worthless and unsupported laptops, most of which have become e-waste.
Windows on those old, expensive, heavy, hot, poor-battery-life x86 laptops
macOS on these new, thin, all-day-battery shiny M2 gadgets
"I don't know what an OS is but it does Teams / Outlook, Web and Facebook", super cheap, super thin laptop running some sort of ARM SOC and has mobile data
"Windows: the choice of the VAX generation"