Last paragraph of the article:
"Meanwhile, around the same price points as the Comet Lake Cores, AMD's touting up to 12-core (24 thread) 7nm Ryzen parts clocked up to around 4.6GHz with faster RAM and PCIe 4."
Ouch!
Intel this week unveiled the desktop processors in its 10th-generation Core series, the headline component being the 10-core i9-10900K that can run up to 5.3GHz. These are refined 14nm Skylake-based parts – a microarchitecture that dates back to 2015 – rather than the 10nm microprocessors people have been waiting years for. …
And that's why I'll be going with AMD in my next system. They've proven they can reliably, steadily, consistently produce 7nm parts while Intel is still stuck trying to get out of 10nm land.
Intel is trying to bolster its sales by touting 5+GHz speeds, but then you look at the fine print and find out that's only on a single core, only if the chip can sustain it before thermal runaway, & only if you've installed the uber cooling system to keep it cool. Meanwhile AMD "only" runs at 4+GHz, but that's on all cores, no OC needed, & with just a plain cooling fan. Put the same uber cooling rig & OC on the AMD that Intel needs, and you wind up with a part that (pardon the pun) smokes Intel's ass.
I'll take the AMD for $500 Alex!
People must remember that TDP for Intel means the minimum power needed, whereas for AMD it means the typical power needed. What this means is the Intel part can indeed reach 5+GHz, but it will probably be drawing 2 to 2.5 times as much power as the TDP states, whereas the AMD part will only need about half as much again (roughly 1.5x its TDP) to reach the advertised boost.
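As a rough back-of-the-envelope (using the rated TDPs of an i9-10900K and a Ryzen 9 3900X, and the multipliers claimed above rather than any official figures):

# Rough numbers for the claim above; the multipliers are the ones quoted
# in this comment, not official vendor figures.
def estimated_draw(tdp_watts, multiplier):
    return tdp_watts * multiplier

intel_tdp = 125   # rated TDP of the i9-10900K
amd_tdp = 105     # rated TDP of a Ryzen 9 3900X
print("Intel at boost: ~%d-%dW" % (estimated_draw(intel_tdp, 2.0), estimated_draw(intel_tdp, 2.5)))
print("AMD at boost:   ~%dW" % estimated_draw(amd_tdp, 1.5))
# prints roughly 250-312W for the Intel part and ~157W for the AMD part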
I jumped from Intel to AMD last year (last used a personal AMD system around 2011 I think). Very glad I did, have an 8 core, 16 thread system currently (3800X) and it just eats anything I throw at it.
One of the big selling points for me was AMD's ongoing support for the same socket. Something Intel just doesn't seem to get!
If I need an upgrade at some point, there's no need for a new system: I could drop a 3950X (16-core, 32-thread) directly in the box. The TDP is even rated the same as my current 3800X (as the 3950X is much better binned), so I wouldn't even need to upgrade the cooling (which is overkill anyway atm).
Better yet, later this year the new Zen 3 based chips are due out, which will use an improved 7nm process that should allow for lower power and/or higher clocks, plus Zen 3 improves the IPC again on top, so the 4950X (or whatever it becomes) should be even more of a beast of a chip than the current 3950X!
Unfortunately the Zen 3 chips are very likely to be the last for the AM4 socket, as Zen 4, due out some time next year, is using DDR5 rather than DDR4, so it needs a new socket. But I can't see myself needing more than 16 cores for quite a few years to come!
Also with AMD, I'd expect the new AM5 (or whatever it gets called) socket, for Zen 4, to get supported for years to come, at least until DDR6 comes along.
Gordon Moore, and the last couple of decades of deploying processors, have taught us to expect compute to double roughly every two years.
However, this new i5/i7/i9 lineup is hardly double the speed. Worse, it looks to be close to double the price, power consumption and thermal output.
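For what it's worth, here's what "doubling every two years" compounds to, as a toy calculation of the expectation rather than a measurement:

# Toy compounding of "compute doubles every two years"; purely illustrative.
def expected_multiple(years, doubling_period=2):
    return 2 ** (years / doubling_period)

for years in (2, 4, 10):
    print("after %2d years: expect ~%.0fx the compute" % (years, expected_multiple(years)))
# 2 years -> 2x, 4 years -> 4x, 10 years -> 32x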
Was the R&D directed towards expanding things we don't see, like the on-die architecture supporting an AI enhanced Intel Management Engine, to expand the gaze of the NSA, and its many 'eyes'?
OK, maybe that's a bit far-fetched. It's probably just down to a neural network buying a seat on the board, and using it to build out a technology roadmap that suits the Rise of the Robots/SkyNet.
Moore's law really refers to the number of transistors on the chip, and even then it was only an observation about improvements in process technology. Clock speeds hit their upper limits a long time ago, which is why we started seeing multiple cores as a consolation. Intel used to be top of the process game but lost the crown a few years ago as it lost focus chasing after mobile, machine learning, etc.
"Lost the crown a few years ago as it lost focus chasing after the mobile"
No, it was sheer laziness and hubris. AMD didn't have a better part, so Intel kept soaking us for everything it could get
HDD makers have been doing the same thing for the last 9 years - culminating in this DM-SMR NAS/RAID drive fiasco - and they're about to meet their SSD Waterloo as large-capacity SSD creeps ever closer to their price point (enterprise SSD is as little as twice the price of enterprise HDD now)
Despite WD putting a brave face on it in the quarterly earnings report, WHO will reward the HDD makers for their past and ongoing bad behaviour by knowingly buying an SSD from them?
"No, it was sheer laziness and hubris. AMD didn't have a better part, so Intel kept soaking us for everything it could get"
What? Intel have sunk the best part of $100bn into 10nm fabs trying to get a working solution. What comes across as laziness and hubris is the product lines, when the underlying issue is the need to move to a smaller process node with first-generation equipment. By the time TSMC had demonstrated that the 2nd-generation equipment was able to address some of the accuracy issues, Intel had to wait ~2 years to get their equipment installed and able to compete. That 2-year mark will be somewhere around late Q3 2020 to 1H 2021 for 10nm, with 7nm only 6-12 months behind (Intel are heavily dependent on the current and previous generation nodes to get all of their product lines manufactured)
The laziness and hubris argument doesn't match the reality - it would be like saying Lewis Hamilton's lack of podium finishes in 2020 was a result of waning talent, ignoring the reality of the situation. Yes, Intel may have told half-truths to the market about the state of 10nm, but the market didn't care as long as they hit their numbers. And Intel have smashed their numbers, even if it hurt some customers in the process.
If Intel had failed to hit the numbers, they would have struggled to get funding for 7nm fabs and no 7nm fabs means falling far behind the competition. 7 years on 14nm might have become 10 years on 10nm because Intel couldn't afford the number of fabs (at least 4, maybe 6 at US$20bn each depending on number of passes required and speed of EUV process) if banks withdrew credit/loan facilities in the face of uncertain production capabilities.
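Quick sanity check on those figures (just multiplying the numbers quoted above):

# Sanity check on the fab figures quoted above.
fab_cost_bn = 20          # ~US$20bn per fab, as stated above
for fabs in (4, 6):
    print("%d fabs x $%dbn = $%dbn" % (fabs, fab_cost_bn, fabs * fab_cost_bn))
# $80bn-$120bn, which is in the same ballpark as the ~$100bn sunk into 10nm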
"The laziness and hubris argument doesn't match the reality"
Not regarding process technology. Intel was a bit guilty of assuming that it didn't need to do much to continue to succeed in the data centre space, where it made all its profit. And, let's face it, this is where it's still doing very good business. But it repeatedly fluffed mobile, wasted a lot of time trying to compete with nVidia for graphics, and waited too long to get into the custom chip business.
The company deserves a lot of criticism but, looking at the numbers, I wouldn't mind failing that badly.
"What's gone so wrong with Intel's 10nm process is what I want to know."
Intel haven't released too many details, but there have been both official statements and unofficial rumours. Intel say they were "too aggressive", targeting too many major changes at once for a new node.
The three major issues were:
- lithography: ASML were trialling new equipment for the 10nm/7nm processes and the first generation was not able to accurately handle chips through the multi-patterning steps, resulting in high defect levels. TSMC avoided the first-generation equipment by "luck" (they were 2 years behind Intel) and Samsung used a slightly different process to Intel/TSMC, which may be the best long-term option.
- chemistry: Intel tried to move from copper to cobalt interconnects to address the limits of copper, but this meant Intel had to learn a lot about the new chemistry. This was within Intel's control and they could have delayed it to a more mature lithography process. It resulted in high defect levels as Intel learnt how to manage the new chemical processes. Rumours are that issues were blamed on chemistry that may actually have been caused by other failings.
- process ordering: Intel tried to use a process known as contact over active gate (COAG) to reduce transistor size. This caused two issues - a significant increase in the number of process steps required (slowing down throughput) and a requirement for levels of lithography accuracy that were not achievable. This was the step that was likely "too aggressive"; however, once it was removed in the 10nm "Tock", it still didn't result in products that could be manufactured (see chemistry and lithography).
Intel is likely to have learnt an enormous amount about manufacturing chips on smaller process nodes, but we may not see the results until 7nm is released in ~2022. In terms of addressing all the major 10nm issues, we should see parts in late Q3/Q4 of 2020 to judge by. The current 10nm parts address the issues, but Intel had to focus on yields rather than speed with the first batches of working parts - late 2020 will show whether the speed issue can be addressed.
I seem to remember that 10nm was considered to be a bad idea – you get problems at this scale without sufficient benefits in lower power – which is why other fabs went straight to 7nm: half the size, so lots more transistors on the die, lower power and still the same quantum/RF problems to deal with. This gave Intel an advantage for a few years because of its established 14nm process.
"I seem to remember that 10nm was considered to be a bad idea"
Process node sizes are largely arbitrary - they signify a position in the hierarchy of one manufacturer's processes, but because they are made up of so many different measurements, calling it 10nm instead of anything between 8 and 13 would likely indicate the same thing.
Known measurements for the respective processes are available here - Intel's 10nm is "smaller" than TSMC's and Samsung's 7nm. At present, Samsung looks very promising for the next gen (Intel 7nm vs 5nm from TSMC/Samsung):
https://en.wikichip.org/wiki/7_nm_lithography_process#P1276
https://en.wikichip.org/wiki/10_nm_lithography_process
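For a rough sense of why the names are arbitrary, these are the commonly quoted peak transistor-density figures (rounded and approximate; the wikichip pages above have the detailed measurements, and exact numbers vary by source and cell library):

# Commonly quoted peak logic transistor densities (MTr/mm^2), rounded;
# figures vary by source and cell library, so treat as indicative only.
densities = {
    "Intel 10nm":    101,
    "Samsung 7LPP":   95,
    "TSMC 7nm (N7)":  91,
}
for node, mtr in sorted(densities.items(), key=lambda kv: -kv[1]):
    print("%-15s ~%d MTr/mm^2" % (node, mtr))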
I'm sorry to say that 2x "speed" (frequency, aka Dennard scaling) every 3 years went out the window around 15 years ago.
Now, die shrinks result in higher core counts and a small increase in frequency. There is often a power reduction too, as long as it's possible to reduce the operating voltage (which limits frequency).
However, as the article states, these are 14nm parts, so there is no die shrink. The increased power is a result of the increase in operating frequency. This is a non-linear effect because you need more voltage to achieve the frequency, and the dynamic power increases with the switching rate (roughly in proportion to voltage squared times frequency).
As far as I'm aware, the entire industry uses the same definition of TDP, which is "how much waste heat your thermal solution must deal with". BUT I believe it would be true to say that due to the smaller process nodes being more power efficient, a 10nm CPU can run for longer at a given frequency without the power dissipation getting out of hand.
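To put a toy number on that non-linearity, here's a minimal sketch of the usual dynamic power relationship (P roughly proportional to C x V^2 x f); the capacitance and voltage figures are invented purely for illustration:

# Toy illustration of dynamic power scaling, P_dyn ~ C * V^2 * f.
# The capacitance and voltage figures below are invented purely for illustration.
def dynamic_power(c_eff, volts, freq_hz):
    return c_eff * volts ** 2 * freq_hz

C = 1e-9                                   # effective switched capacitance (made up)
base = dynamic_power(C, 1.00, 4.0e9)       # 4.0GHz at a notional 1.00V
boost = dynamic_power(C, 1.25, 5.3e9)      # 5.3GHz needing a notional 1.25V
print("~%.1fx the dynamic power for a %.2fx clock increase" % (boost / base, 5.3 / 4.0))
# roughly 2.1x the power for a 1.33x frequency bump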
Major improvements in CPU architecture take a long time to come around (just think of how many false starts AMD had before they got things right with Zen) and incumbent vendors get complacent and think they "own" the market - no conspiracy theories are needed to explain the fact that Intel doesn't have any exciting new CPUs. One hopes part of the reason this is such an uninspiring update is that all the bright boys are working on real architectural changes for 10nm.
I suspect that when Intel does get around to making 10nm desktop processors, they will catch up with AMD due to the die-shrink but won't be significantly better. In essence both Intel and AMD now have highly refined and effective CPU architectures, so future improvements are likely to be merely incremental.
"As far as I'm aware, the entire industry uses the same definition of TDP,"
They don't.
Somewhere along the line, Intel went with the notion that, as they could make their CPUs draw low power when idling, they could reduce the quoted TDP to "typical use", as long as they qualified it in the fine print with caveats that full-speed computing wouldn't be continuous operation, whilst AMD stuck with the benchmark of "all cores loaded, working at standard speed"
This redefining of the wheel is more or less the argument that HDD makers have been putting forward to justify DM-SMR in desktop and NAS drives - resulting in hard drives that work "just as fast" as the prior generation for 30-60 seconds before performance falls off a cliff, and can drop as low as 1% of normal speed (then selling these as "fast, for desktop professionals")
I've got 200W TDP Xeon parts - and I've measured them drawing over 500W at full speed under load, before they even start playing turbo boost games.
200W AMD parts will, worst case, draw 225-230W by turboing up a few cores.
And, while not mainstream or even x86 derived, Raptor had some fun findings on the power draw of the nominally 95W TDP 4-core Power9 CPU from IBM: under load, apparently, they consistently drew around 65W and it was a struggle to get them to draw more.
The only conclusion that could be drawn was the 95W TDP was a “designed to” figure that had been specified before the chips were finalised.
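Lining those reported figures up (these are the numbers mentioned above, not measurements of mine):

# Reported draw vs rated TDP, using the figures mentioned in this thread.
parts = [
    ("Intel Xeon, 200W TDP",        200, 500),   # measured over 500W under full load
    ("AMD, 200W TDP",               200, 230),   # worst case ~225-230W with cores turboing
    ("IBM Power9 4-core, 95W TDP",   95,  65),   # Raptor's finding: ~65W under load
]
for name, tdp, measured in parts:
    print("%-28s ~%dW measured, %.2fx of rated TDP" % (name, measured, measured / tdp))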
Because, erm, Intel. Same node, pretty much the same chips (just i9s renamed as i7s), nothing much new on the chipset (well, Wi-Fi 6 is nice to have, as is 2.5Gb Ethernet, but fairly moot when nothing else on my network can communicate at that bandwidth) and a new top end with a 25% increase in cores (though still fewer than AMD's second-place chip).
Power use is going to be insane at the top - remember all the jokes about AMD's chips that could clock really high but needed industrial coolants to get there? Ahead of their time.....
Won't need stickers saying Intel Inside as you will know, because your PC is now heating your entire house.....
But you will get an extra 2 fps so all good.
(where's the Chipzooky icon?)
4 years ago, Intel's top-of-the-line 6-core part cost more than this. The launch price of a 6-core Core i7-6000-series part was about $620, the 8-core was about $1100, and the 10-core was about $1725. Now, what has competition given us? A 10-core Core i9-10000-series part for about $500.
I love competition!
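Per core, those launch prices work out roughly like this (a quick back-of-the-envelope with the figures above):

# Rough launch price per core, using the figures quoted above (rounded).
parts = [
    ("2016 6-core",          620,  6),
    ("2016 8-core",         1100,  8),
    ("2016 10-core",        1725, 10),
    ("2020 10-core (i9)",    500, 10),
]
for name, price, cores in parts:
    print("%-20s ~$%d per core" % (name, price / cores))
# ~$103, ~$137 and ~$172 per core then; ~$50 per core now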
No, because Intel (to be fair, most companies) picks and chooses what to compare. Everyone measures differently.
However, basically the table comes down to: AMD yes, Intel no.
To be fair, Intel can still be slightly faster at single-core, which some gamers prefer, and it has bribed some games companies to make their games look a bit better on its chips. I doubt most people would even notice, and to get that performance Intel is far more expensive and runs hot.
Define performance? e.g. best frames per second, best performance per watt, best frames per $ spent etc?
e.g. Are you primarily a gamer, do you game and stream, are you focused on productivity etc?
To be honest, atm, irrespective of how you define it, AMD wins hands-down. Even with Intel's new chips and price cuts.
The only real exception is if you want the absolute best frames per second and are using an RTX 2080Ti (and you have no budget limitations), in which case the fastest i9 is arguably 'best'.
But for all other use cases, go for an AMD.
If you want to see comparison benchmarks, I'd recommend looking at Gamers Nexus on YouTube. Although don't expect anything other than an overview of these new chips currently, as no benchmarks exist yet because no one has the hardware yet.
For a more enthusiast level, rather than hardcore techie, have a look at JayzTwoCents instead; he's done a vid on the new Intel chips here.
Note these Intel chips are new, so no benchmarks yet, as reviewers are still waiting for the hardware, and then they will need some time to actually run all the benchmarks, so it will be a few more days before these turn up.
Anandtech Bench will be updated with these CPUs eventually.
I think the real difference comes when you have thousands of CPUs rammed into the same racks for HPC. Here power consumption and cooling are really important, as the electricity costs are significant. Your aim is to have the cluster run as close to 100% capacity as possible to maximise the investment. If AMD can provide the same performance when executing the code but at half the power, that is a big win. Throw in the cheaper CPU costs and you are now able to add even more compute for the same money.
We have used Intel for a few of the two-yearly refresh cycles, but now AMD is looking to be a real contender.
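As a rough illustration of why that matters at scale (every figure below is a placeholder assumption; plug in your own node count, wattage and tariff):

# Toy estimate of annual electricity cost for a cluster; every figure here is
# a placeholder assumption, not a real deployment.
nodes = 1000
intel_watts_per_node = 400      # assumed draw per node
amd_watts_per_node = 250        # assumed draw for equivalent throughput
price_per_kwh = 0.12            # assumed tariff, $/kWh
hours_per_year = 24 * 365

def annual_cost(watts_per_node):
    return nodes * watts_per_node / 1000 * hours_per_year * price_per_kwh

saving = annual_cost(intel_watts_per_node) - annual_cost(amd_watts_per_node)
print("Intel-ish: $%.0f/yr  AMD-ish: $%.0f/yr  saving: $%.0f/yr" %
      (annual_cost(intel_watts_per_node), annual_cost(amd_watts_per_node), saving))

And that's before you pay again for the cooling to pump the waste heat back out of the room.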