I still don't understand how Intel managed to mess up their transition from 14nm so badly.
Imagine where we'd be if Intel silicon was running at 5nm
Qualcomm saw what Apple's M1 chip could do for performance and battery life, and claims its next Arm-compatible microprocessors will do exactly that for Windows PCs. We're told Qualy will offer samples of its next-generation Windows-supporting Arm system-on-chips late next year, with the first products featuring the components …
Behind Arm running at 3nm? Keep up, 3nm is the new goal.
Which highlights the age-old argument of RISC (Reduced Instruction Set Computer) v CISC (Complex Instruction Set Computer) at the same die size. Arm will always win out (being less complex): when designing for the next smaller die size, Arm should always get a design to market quicker than Intel, unless Intel starts throwing an order of magnitude more designers/testers at the problem than Arm.
The reason? It's right there, in the names.
At every stage of new, smaller TSMC die technology, Arm has the advantage to market.
> Behind Arm running at 3nm? Keep up, 3nm is the new goal.
You can't mix marketing nomenclature from different fabs and expect them to mean the same thing. 3nm is the new goal on TSMC, but in the 'old' Intel nomenclature (before the recent change), Intel 5nm would have been roughly equivalent to TSMC 3nm, just like Intel 10nm is equivalent to TSMC 7nm, and Intel 7nm is equivalent to TSMC 5nm.
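To make that concrete, here's the rough mapping from the comment above as a tiny Python lookup table. These are approximate marketing/density equivalences only, not measured feature sizes, and the pairings are just the ones stated here:

```python
# Rough equivalences between Intel's pre-rename node names and TSMC node names,
# as described above. Marketing labels, not literal transistor dimensions.
INTEL_TO_TSMC_ROUGH_EQUIV = {
    "Intel 10nm": "TSMC 7nm",
    "Intel 7nm":  "TSMC 5nm",
    "Intel 5nm":  "TSMC 3nm",
}

for intel_node, tsmc_node in INTEL_TO_TSMC_ROUGH_EQUIV.items():
    print(f"{intel_node} ~ {tsmc_node}")
```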
> Which highlights the age-old argument of RISC (Reduced Instruction Set Computer) v CISC (Complex Instruction Set Computer) at the same die size.
The Intel/AMD x86 designs have been RISC-like internally for quite some time now (20-odd years). They have a CISC-to-RISC instruction 'translator' in the front end so that code can keep using the CISC instruction set, but once those CISC instructions hit the CPU front end they are translated into RISC-style micro-ops for the processing cores underneath.
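As a purely illustrative sketch of that idea (the instruction names and micro-op format here are invented for the example; real x86 decoders and their micro-op encodings are proprietary and far more complex), one memory-operand CISC instruction gets cracked into several simpler load/op/store operations:

```python
# Toy illustration of "CISC in, RISC-like micro-ops out". Invented micro-op
# syntax; not a model of any real Intel/AMD decoder.
def crack_to_uops(instruction: str) -> list[str]:
    """Split a memory-operand CISC-style instruction into load/op/store micro-ops."""
    op, operands = instruction.split(maxsplit=1)
    dst, src = [s.strip() for s in operands.split(",")]
    if dst.startswith("["):            # destination is a memory operand
        addr = dst.strip("[]")
        return [
            f"load  tmp0, [{addr}]",   # read the memory operand into a temp register
            f"{op:<5} tmp0, {src}",    # do the ALU work register-to-register
            f"store [{addr}], tmp0",   # write the result back to memory
        ]
    return [instruction]               # register-only instructions pass through as one uop

print(crack_to_uops("add [rbx], rax"))
# ['load  tmp0, [rbx]', 'add   tmp0, rax', 'store [rbx], tmp0']
```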
Most of the die space isn't RISC/CISC related anyway: it's things like caches, buffers (e.g. TLBs), out-of-order execution logic (i.e. Spectre processing engines ;) ), branch predictors, dedicated logic blocks (de/encryption engines, AVX, etc.), memory controllers and buses (e.g. ring bus, Infinity Fabric). Most of the space is taken up by things other than the actual ALUs. The AMD chiplets that have 8 cores on them but offload a lot of peripheral stuff (memory controllers/buses, etc.) to separate I/O dies are quite small.
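To put a face on one of those blocks, here's a minimal sketch of the textbook 2-bit saturating-counter branch predictor. It's a generic teaching scheme, not a model of any actual Intel/AMD predictor, but it shows the kind of per-branch bookkeeping that such structures spend transistors on:

```python
# Minimal textbook 2-bit saturating-counter branch predictor.
# Generic teaching example, not any real Intel/AMD design.
class TwoBitPredictor:
    def __init__(self, table_bits: int = 10):
        self.mask = (1 << table_bits) - 1
        # One counter per table entry: 0-1 predict not-taken, 2-3 predict taken.
        self.counters = [1] * (1 << table_bits)

    def predict(self, pc: int) -> bool:
        return self.counters[pc & self.mask] >= 2

    def update(self, pc: int, taken: bool) -> None:
        i = pc & self.mask
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

# A loop branch taken 9 times then not taken: the predictor quickly learns "taken".
bp = TwoBitPredictor()
outcomes = [True] * 9 + [False]
hits = 0
for outcome in outcomes:
    hits += (bp.predict(0x400123) == outcome)
    bp.update(0x400123, outcome)
print(f"{hits}/{len(outcomes)} correct")  # 8/10 with this simple scheme
```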
Intel had a number of process improvements almost ready, so when they got to 10nm they got excited and added them all in one go. It's a great and very advanced node, but I suspect it became very difficult to get everything working and yielding well at once, which is why it took forever.
TSMC, by contrast, introduce their process improvements one at a time; that's why they have multiple successive versions of the same node and, well, so many nodes in general with slightly misleading names ("5nm" isn't really 5nm; it's much bigger). This baby-steps method seems to have been a safer way of progressing, and ultimately faster... It's probably also good for pricing, with new nodes coming out all the time.
All of the above is speculation.
Ffs, if you're going to downvote, at least have the spine to put your counter argument across to explain why you think the original post is "off track".
A downvote with no explanation is just childish.
We're all adults here (I hope!). I don't mind being corrected where I'm wrong. After all, there are only so many hours in the day to keep abreast of everything that's happening.
Microsoft have been doing amd64-on-AArch64 emulation for a long time. They've not made a big deal out of it, and I've no idea how good it is, but it's there.
The problem Microsoft have is backwards compatibility. Microsoft built their entire OS business on making sure ancient line-of-business software and accessories still work.
You can still use a 1980s serial mouse on Windows 11, and nearly all the early 32-bit software for NT and XP still runs.
A heck of a lot of currently used Windows software is 32-bit, and it doesn't appear to be feasible to emulate 32-bit x86 on AArch64 with good performance - at least, nobody has said they've done it.
So they'll lose all the 32-bit software, but unlike the 16-bit stuff there won't be a DOSBox to put it in.
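For what it's worth, emulators in this space generally lean on dynamic binary translation with a translation cache rather than interpreting instruction by instruction, which is why hot code paths can run far faster than a naive emulator would suggest. A minimal sketch of that caching idea is below; the function names and output are invented for illustration and have nothing to do with the actual internals of Microsoft's emulation layer:

```python
# Toy sketch of the translation-cache idea behind dynamic binary translation.
# translate_block() and its output format are invented for illustration only.
translation_cache: dict[int, list[str]] = {}

def translate_block(guest_pc: int, guest_code: dict[int, list[str]]) -> list[str]:
    """Pretend to translate one basic block of guest (x86) code into host (Arm) code."""
    return [f"; translated from guest 0x{guest_pc:x}"] + [
        f"  host_equivalent_of({insn})" for insn in guest_code[guest_pc]
    ]

def run_block(guest_pc: int, guest_code: dict[int, list[str]]) -> list[str]:
    # Translate each guest block only once; later executions hit the cache,
    # so frequently re-executed guest code avoids the translation cost.
    if guest_pc not in translation_cache:
        translation_cache[guest_pc] = translate_block(guest_pc, guest_code)
    return translation_cache[guest_pc]

guest_program = {0x1000: ["mov eax, 1", "add eax, 2", "ret"]}
print("\n".join(run_block(0x1000, guest_program)))
print("cached blocks:", list(map(hex, translation_cache)))
```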
Apple on the other hand have a long history of telling users to go die in a fire if they want to keep that "old" software or accessory.