SSDD and I don't mean disks
Titanic. Deck chairs. Iceberg.
Needs better Feng Shui!
What could go wrong?
Intel on Monday shook up its engineering management ranks after not only admitting its 7nm manufacturing pipeline had stalled due to defects but also that it is considering asking rival factories for help. Chipzilla said, effective Monday, August 3, chief engineering officer Venkata "Murthy" Renduchintala will exit the …
Well, they thought they had brought in an outsider who had done this before, but Murthy was probably just a bit too gung-ho about what he brought to the party.
He was the initial force behind Qualcomm's purchase of CSR, but during the extended due-diligence process he suddenly disappeared from the map after he'd lost a pissing contest with another Qualcomm big-wig.
Colour me surprised at his sudden departure from Intel.
The problem stems from Intel being one of the first in the business. They developed their own proprietary process, something that served them well for years but was bound to lead to trouble eventually. There were literally two sorts of chip for a decade or more -- "Intel" and "Everyone Else" (with TSMC being the "Everyone Else" front runner) -- and as you can imagine, as geometries got smaller and the overall market larger, that left Intel as a bit of an outlier. A hugely successful outlier, but still a company that has to literally do everything itself to remain competitive.
The smart move would have been to become more integrated with the industry as a whole years ago, but institutional inertia is difficult to overcome, especially if what you're currently doing is raking in the profits.
I suspect the phoenix was a victim of a round of cost cutting where the business was cunningly trimmed of all the capabilities needed to actually do stuff, whilst at the same time the senior executives found new ways to pay themselves more.
You don’t need to be left wing to be cynical about the mess that is western capitalism.
What could possibly go wrong with prioritising shareholder returns over investment in R&D and product development? That $25,000,000,000+ they had lying around to spend on stock buybacks in the last two years alone definitely couldn't have been used any better ($4B in Q1 2020, $13B in 2019, $10B in 2018).
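For what it's worth, the buyback figures quoted there do stack up as claimed. A trivial sketch (using only the three figures cited above):

```python
# Intel stock buybacks as quoted above, in billions of USD
buybacks = {"Q1 2020": 4, "2019": 13, "2018": 10}
total = sum(buybacks.values())
print(f"Total: ${total}B")  # -> Total: $27B, comfortably over the $25B claimed
```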
"Kelleher previously oversaw Intel's manufacturing work, including the ramp up of its disastrous 10nm node." So, the person responsible for the 10nm disaster is in charge of 7nm and 5nm. Right.
I don't think it means what you think it means.
She was in charge of operations, not R&D:
She is responsible for corporate quality assurance, corporate services, customer fulfillment and supply chain management. She is also responsible for strategic planning for the company’s worldwide manufacturing operations.
She was in charge of manufacturing in terms of building new fabs as required to meet demand, scheduling and organising conversions of existing fabs (e.g. migrating a 28nm fab to 10nm), maintaining fabs and keeping them running (sourcing the consumable chemicals etc.). The physical infrastructure of fabbing.
R&D say "these machines can do 20k/month with a defect rate of x", but when she puts them into a fab, they only do 5k/month with a defect rate of 10x ... She can't build fabs to meet demand if the technology given to her (the litho machines developed by R&D) is complete shite and can't hit its specifications.
Yes, it's terrible optics that Intel's 7nm fab process is so b0rked, but:
- Apple doesn't make server chips.
- core-i9 is still a good proposition for desktops/laptops.
- NVIDIA doesn't make server chips.
- Marvell - good luck to them with ARM64. So far, no-one's ARM64 is beating Xeon on SPEC performance.
- AMD - doesn't beat Xeon on SPEC performance either.
The relevant benchmarketing standard here being SPEC.
Intel routinely posts a peak ratio of over 12 on SPECint 2017 speed, while AMD maxes out at 10.5. Link.
Some people dislike SPEC because they say it's irrelevant. However: SPEC is not a collection of purely artificial programs written specifically for benchmarketing purposes, and with no connection to real-life software. All the benchmarks in SPEC are real, mostly open source, programs, that were originally written for clearly defined practical purposes. When in Rome ...
Maybe nm bragging rights don't always translate into performance.
> From this article in March, looks like the AWS Graviton 2 [ ... ]
Only results submitted to, and published/verified by, SPEC are valid.
Claims made anywhere else about SPEC performance results that aren't submitted to SPEC are just marketing bullshit. Certain criteria must be met in order for a SPEC benchmark result to be considered valid. One of the criteria is repeatability. There are several other criteria.
If the claimed results weren't submitted to SPEC - and they weren't, because I searched for submissions on ARM64/AArch64, and there aren't any - that tells me everything I need to know about their validity.
Erm isn't that a per-thread figure? AMD offers more threads for less money and less power than the equivalent Intel chips so I'm not sure what the win is supposed to be for Intel (and you also don't need to turn off SMT/Hyperthreading because of security concerns with AMD).
> Erm isn't that a per-thread figure?
The SPEC ratio is a number that is generated by SPEC software. It represents the relative performance index of the submitted result. For SPEC speed, the ratio takes into account the number of threads, if the benchmark contains OpenMP parallel blocks of code (i.e. threads). The higher the ratio, the better the relative performance. Not all SPEC speed benchmarks use OpenMP. Many do, but some don't.
Security concerns, or cost of hardware or software aren't part of the SPEC benchmark parameters.
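For anyone unclear on what a "ratio" actually is here, a rough sketch of how a SPEC-style score is derived (the benchmark names are real SPECspeed 2017 components, but the runtimes below are invented for illustration, not real SPEC reference times): each benchmark's ratio is the reference machine's runtime divided by the measured runtime, and the overall score is the geometric mean of the per-benchmark ratios.

```python
import math

# Illustrative runtimes only -- not actual SPEC reference figures
ref_seconds = {"600.perlbench_s": 1775, "602.gcc_s": 3981, "605.mcf_s": 4721}
measured_seconds = {"600.perlbench_s": 160, "602.gcc_s": 350, "605.mcf_s": 420}

# Per-benchmark ratio: reference runtime / measured runtime
ratios = {b: ref_seconds[b] / measured_seconds[b] for b in ref_seconds}

# Overall score: geometric mean of the ratios
score = math.prod(ratios.values()) ** (1 / len(ratios))
print(round(score, 1))
```

Higher is better, and because it's a geometric mean, one outlier benchmark can't carry the whole score.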
"Security concerns, or cost of hardware or software aren't part of the SPEC benchmark parameters."
Which is why you're getting all the downvotes for pushing SPEC as the only benchmark that matters. Even if Intel is still ahead on this one particular benchmark, no-one actually cares because a performance benchmark without taking other factors into account is completely meaningless. If some other part gets a slightly lower score, but is cheaper, smaller, more power efficient, offers more features, and so on, then that part is going to be the one people actually want. There are very few places that just want raw power at any cost.
You're also missing the large picture. The question isn't whether Intel can still win on a benchmark right now, but what the trend is and what might be the case a few years down the line. Until the last couple of years, Intel had a massive, unquestionable lead in desktop and server parts. AMD fell way behind a decade or more ago, and ARM was just an upstart mobile maker that occasionally tried to dip a toe into real computing. Now, both AMD and ARM are fighting Intel for the top spot, and you can pretty much pick which one you want to claim wins depending on which benchmarks you prefer. And now Intel say that they're falling even further behind while their competitors carry on pushing ahead. So what do you think is going to be the case in two or three years time? Argue all you like that Intel is just about clinging to the top spot for certain workloads for now, it's just not important to pinpoint the exact moment someone else nosed ahead; it's the fact that they're very clearly in the process of being overtaken that is important.
> Which is why you're getting all the downvotes for pushing SPEC as the only benchmark that matters.
Or maybe it's because the vast majority of commentards here don't understand the difference between an industry benchmark based on some objective parameters, and a personal opinion based on hormones.
That's like downvoting a blood test because you don't like the results.
If SPEC was as irrelevant as you claim it is, why are there so many official SPEC submissions from the industry?
> [ ... ] no-one actually cares because a performance benchmark without taking other factors into account is completely meaningless.
What other factors? Care to enumerate them?
If anyone has a better performance benchmark in mind, propose it, and have it accepted by the industry. Until that happens, SPEC is the only one we've got.
"If SPEC was as irrelevant as you claim it is, why are there so many official SPEC submissions from the industry?"
I didn't say it was irrelevant, I said no-one cares about a single benchmark in isolation without considering context.
"What other factors? Care to enumerate them?"
I already did, as have numerous other people here. Cost, efficiency, size, features, and no doubt plenty of other things depending on the use case. You seem to be obsessed with the fact that Intel can get the same score as AMD while using fewer cores. But as has been repeatedly pointed out, those AMD cores cost significantly less, need fewer sockets, and use less power. So sure, Intel win a benchmark on a performance-per-core basis. So what? Why do you think this is such a big deal that we should all care about it to the exclusion of all else?
> Cost, efficiency, size, features, and no doubt plenty of other things depending on the use case.
Cost is not a SPEC parameter. And it's not quantifiable anyway. Both Intel and AMD charge what the market will bear. And the price paid has nothing to do with the price advertised anyway. So, we don't know what cost even means here.
Size is not a SPEC parameter. I don't even understand what size means in this context. Size of the die? Size of the chip itself? Size of the chip socket?
Define efficiency. Did you mean power consumption? It's not a SPEC parameter. SPEC actually has defined an output for power consumption, but no-one ever reports it.
Features? What features? What does features mean here? Is there a list of clearly defined features?
no doubt plenty of other things depending on the use case: What other things? Can you list them? I use AMD chips as coasters. Is this a valid use case?
You do not appear to have even a minimal theoretical grasp of what a benchmark is. You keep mixing in nebulous and undefined terms -- features -- that appear to suit your confirmation biases of the moment. And when faced with the actual results of the benchmark -- namely numbers produced in a controlled environment that followed the evaluation specs -- you conveniently ignore them, if they happen to contradict your expectations bias. Case in point: So what if Xeon objectively produces better benchmark results?
Or you counter them with undefined terms for which no information is available. Case in point: features.
All of this tells me three things:
- you've never run a benchmark of any kind.
- you've never been tasked to run a benchmark of any kind.
- you can't be trusted to run a benchmark of any kind because you are incapable of isolating your expectations bias / confirmation feedback loop from the benchmark results.
Meaning: if you are faced with an outcome that does not match your expectations bias, you will intentionally skew the benchmark results just to confirm your bias.
Cost may not be a factor in the SPEC benchmark suites, but cost does appear in other widely recognised benchmark ratings - for example the TPC benchmark family, including for example the TpmC benchmark and the associated Price/TpmC number.
Readers who already know everything won't need this link, but others might find it (and others related to it) interesting:
Readers who look *really* carefully may find there's even a specification for "energy" too, in terms of (e.g.) Watts per thousand TpmC
edit: the CoreMark benchmark family also has "energy efficiency" as part of the options to be measured.
Sometimes there's more to life than SPEC.
> Or was that the only metric that supports your view?
No, that's not why I only mentioned SPECint speed. I only mentioned SPECint speed because mentioning all four benchmark sets would have taken waay too much space with links and all. And because SPECrate performance is much more dependent on the performance characteristics of the system as a whole, as opposed to SPECspeed.
> Epyc seems to do pretty well in FP rates [ ... ]
Pretty well is beside the point. Does it do better, the same, or worse than Xeon? From what I can see, it still can't beat Xeon.
> [ ... ] the latest Epyc are regularly scoring in the 500 for FP Rate and near 200 on FP Speed.
The AMD benchmark was run with 128 threads.
The Xeon benchmark was run with 96 threads.
Both benchmarks scored a peak ratio of 212.
What follows from these results is that Xeon vastly out-performs Epyc on SPECspeed 2017 FP.
The higher the number of threads, the higher the score in the SPEC ratio computation. So: if Xeon manages to score the same ratio with a lower number of threads compared to Epyc, it follows that running the Xeon benchmark with the same number of threads as Epyc would necessarily score higher.
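That scaling argument can be sketched numerically with the two figures quoted above. Note this crude per-thread normalisation assumes roughly linear OpenMP scaling, which real workloads rarely achieve, so treat it as an illustration of the reasoning rather than a measurement:

```python
# Figures quoted above: both parts posted a peak ratio of 212,
# but with different thread counts.
results = {
    "Epyc": {"ratio": 212, "threads": 128},
    "Xeon": {"ratio": 212, "threads": 96},
}

# Crude per-thread index (assumes linear thread scaling -- a big assumption)
per_thread = {name: r["ratio"] / r["threads"] for name, r in results.items()}
for name, v in per_thread.items():
    print(f"{name}: {v:.2f} ratio points per thread")
```

On those assumptions the Xeon comes out around 2.21 per thread versus roughly 1.66 for the Epyc, which is the basis of the claim.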
I didn't have the time to search through the hundreds of submitted SPEC results and find the absolute perfect optimal submission for either manufacturer.
Here’s the problem for Intel:
They don’t even come top of the table for SPEC performance. (that honour goes to a Fujitsu SPARC machine tested back in 2017, see https://www.spec.org/cpu2017/results/res2017q4/cpu2017-20171211-01435.html )
AMD are currently making CPUs that in 2 socket configuration need an 8 socket Xeon machine to beat them (see https://www.spec.org/cpu2017/results/res2020q2/cpu2017-20200525-22554.html ).
The Intel boxes are vastly more expensive to buy and to run (all those sockets need lots of power).
There is a limited market for “absolutely the fastest machine you can buy”, mostly companies want the best performance they can afford within their budget, or target a given performance level and then see how cheap they can buy it. Intel have a certain amount of inertia that they can rely on here, as it takes companies a while to test and qualify new hardware, but they are starting to come under fire as the AMD alternatives are looking increasingly attractive. They need to be cheaper and consume less power to compete, but to do so needs a smaller, more advanced process than 14nm, which they haven’t really got (even 10nm isn’t ready for server grade chips yet).
But you can buy 2 Epycs with twice the cores for the price of one Xeon, so if you compare like for like, the Epyc is better bang for buck.
That's with Rome.
Milan on Zen3 is on its way by the end of the year, and there is plenty fab space in the pipeline.
With Intel not only faltering at 10nm and 7nm but also having limited capacity in their 14nm+++++++++++++ node, OEMs will not only want to go for EPYC for speed and cost savings, but might have to due to the lack of Intel chips available.
You're right that a reduced process size doesn't mean higher performance by itself. But it does mean lower power usage, a smaller die and hence lower manufacturing costs per chip. Or of course you can use that extra transistor budget you now have to add extra cores, more cache, or for a tweaked die design that uses additional transistors to increase the performance of the chip.
Not saying it isn't possible to compete with an older process. AMD themselves managed it back in the early 2000s with the Athlon, Athlon XP and Athlon 64 that competed well whilst usually running a step or two behind Intel from a process size perspective.
Still, a smaller process size gives you more options and flexibility, and AMD is currently reaping the rewards of this.
You're right that a reduced process size doesn't mean higher performance by itself. But it does mean lower power usage, a smaller die and hence lower manufacturing costs per chip.
Correct on the first two points, but your third point (lower manufacturing cost) does not follow. The process may be inherently more expensive (making anything with tighter tolerances usually is), and/or there may be a lower yield per wafer. From my own (different but related) experience, shrinking a PCB by using smaller track & gap widths, smaller BGA parts and smaller vias may result in a smaller PCB, but rarely in a lower price per board.
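That trade-off can be put into a toy cost model. All numbers below are invented for illustration (real wafer prices and yields are closely guarded), but they show how a pricier wafer plus a lower yield can wipe out the per-die saving from a shrink:

```python
def cost_per_good_die(wafer_cost: float, dies_per_wafer: int,
                      yield_fraction: float) -> float:
    """Cost of each *working* die: wafer cost spread over the good dies only."""
    return wafer_cost / (dies_per_wafer * yield_fraction)

# Hypothetical mature node: cheap wafers, fewer dies, high yield
mature_node = cost_per_good_die(wafer_cost=3000, dies_per_wafer=400,
                                yield_fraction=0.90)

# Hypothetical bleeding-edge node: more dies per wafer, but the wafer
# costs far more and the immature process yields worse
bleeding_edge = cost_per_good_die(wafer_cost=9000, dies_per_wafer=700,
                                  yield_fraction=0.55)

print(f"mature: ${mature_node:.2f}/die, bleeding edge: ${bleeding_edge:.2f}/die")
```

With these made-up figures the shrink actually costs more per good die, which is exactly the point being made: smaller does not automatically mean cheaper.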
I agree: it's mm² of silicon versus performance that matters, not the marketing fluff that "node bragging rights" have now become.
The real danger here is that the West appears to have lost leadership in process technology, and it's a hard thing to regain ... It's going to be grim if TSMC becomes a monopoly for CPU fabrication.
Quote: "core-i9 is still a good proposition for desktops/laptops."
Not really: too costly, too hot, uses too much power. One of the few remaining technical benefits Intel has over AMD is single-core performance, and only by a small margin now with Zen 2. That is also irrelevant for most people and most software, where more cores is usually better. If you're a hard-core gamer with an unlimited budget, then maybe go for Intel, but otherwise AMD all the way.
Also, Intel's single-core lead over AMD, which is basically achieved through raw clock speed, is quite likely to be lost with Zen 3, due before the end of the year (and Intel currently have nothing to compete with it, unless they pull something unexpected from their hat before then). Zen 3 has IPC, clock speed and internal optimisation gains over Zen 2 (reducing known bottlenecks in the Zen 2 architecture), and most analysis I've seen expects these combined to make Zen 3 at least on par with, if not faster than, Intel for the majority, if not all, single-threaded workloads. AMD already have the core count advantage, so they are very likely to pull ahead of Intel on its last benchmarking advantage, namely gaming.
Quote: "NVIDIA doesn't make server chips."
Erm, yes they do. Their data centre revenues were around $3b last year, which is over a quarter of their business and growing.
Granted they don't make CPUs (yet), but these are still 'server chips', and they've been sniffing around ARM, which would likely be a good purchase for them, as they'd then be able to build complete server solutions with an ARM-based CPU plus an nVidia GPU. I could easily imagine nVidia bringing out an ARM CPU, at 7nm or even 5nm, made by TSMC or Samsung, in 12 to 18 months' time, the main issue likely being getting space at a fab to produce them.
> Erm, yes they do.
[ In response to NVIDIA doesn't make server chips ].
Granted they don't make CPUs (yet), but these are still 'server chips' [ ... ]
First, you're contradicting yourself.
Secondly, I said nothing about revenue. I don't care about revenue. The article is about Intel's 7nm fab process, not about revenue.
Thirdly, these aren't 'server chips' any more than they are 'laptop chips' or 'desktop chips'. These are GPUs. Do you know the difference?
Lastly: do you work in marketing somewhere by any chance?
If a chip is designed and built to go into a server, then it is by definition, a server chip.
A GPU is still a processing unit, it's even in the name, and these chips are specifically designed for high end number crunching in data centres, including super computers, so more specialised than a CPU, but again, still a processing unit that goes into a server, ergo server chip. It's not like you can plug a monitor into these things.
And no, I don't work in marketing, I'm in IT, specifically a solutions architect. I help design and implement large scale enterprise solutions. Where do you work, the Daily Mail? Shelf stacker somewhere?
> It's not like you can plug a monitor into these things.
Really? You can't plug a monitor into an NVIDIA card? Or a mobo with an on-board NVIDIA GPU? Not even a little bit?
> If a chip is designed and built to go into a server, then it is by definition, a server chip.
Awesome! That clears it all up. Keep'em coming, mate.
> The server targeted parts indeed lack video ports [ ... ]
Nope. They don't lack video ports at all. They have at least two video ports.
And thusly, you've just announced to the world that you have no clue what you're talking about. Evidently you've never seen one of those NVIDIA boards that's targeted for the purposes you describe. But you're an expert.
Too late now, but why don't you go take a peek at NVIDIA's site - or Amazon. They have pictures of those video boards that are used as GPU co-processors. Yes, all the models have HDMI out ports. You can attach a monitor.
As for myself, I installed one of those super-expensive NVIDIA boards just last week in one of our boxes. Because I'm playing with CUDA at work.
My <insert deity here>, just how stupid are you!?
You specifically brought up server chips, that was YOU, I am not talking at all about consumer GFX cards like the RTX 2080 etc. These are a completely different product.
As Steve Todd has mentioned, the server GPUs are not GFX cards; they are specifically built for server use, rack mounted, and use CUDA etc. to run tasks.
Here's a vid showing someone fitting some of them into a rack system...
As a customer, the important thing is to know which benchmarks serve as an effective proxy for your application. For example when I worked in computational fluid dynamics there was a particular SPEC subtest that showed the same sort of variation between systems as the performance of our own code.
Whatever methodology you use, the goal is to characterise real-world performance at a system level, and use that information to seek out price-performance sweet spots.
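A minimal sketch of that kind of price-performance screen (all the system names, scores and prices below are invented for illustration; in practice you'd plug in your own proxy benchmark results and quoted prices):

```python
# Hypothetical candidate systems: proxy benchmark score and quoted price
systems = [
    {"name": "vendor-A-2socket", "score": 410, "price_usd": 22000},
    {"name": "vendor-B-2socket", "score": 460, "price_usd": 35000},
    {"name": "vendor-C-1socket", "score": 250, "price_usd": 11000},
]

# The sweet spot is the best score per dollar, not the highest raw score
best = max(systems, key=lambda s: s["score"] / s["price_usd"])
print(best["name"])  # -> vendor-C-1socket
```

Note that the raw-performance winner (vendor-B here) isn't the price-performance winner, which is the whole point of the exercise.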
And there are lots of variables here - not just the processor choice, but also the acceptability of different compilers in your organization and the level of willingness to use aggressive & compiler-specific optimization options.
For example the relative performance of Intel and AMD systems will depend on whether the code is compiled with Intel's compiler or gcc, whether the application can make good use of AVX512 (not supported by AMD), etc.
AC because now I'm tuning benchmarks for a hardware vendor. Interestingly we are allergic to making comparisons with the competition, rather we aim to "put our best foot forward" and show that our new kit outperforms previous generations.
A smiling Murphy quipped "you just can't make this shite up."
He hasn't stopped smiling...
Maybe so, but the laws of physics are a harsh mistress and they don't give a rat's ass how good your competition is or how well they're doing.
It's been a long time since I worked in the semiconductor industry, but even then people were chewing over the practicalities and problems associated with sub-10nm nodes - once you start getting down to atomic scale there are some very real problems to overcome.
Memory is fuzzy, but 'state of the art' back then (early 2000s) was 65nm and the transition to 45nm was starting to gather pace, as was the use of 300mm wafers to increase yield - 10nm was still a way off, but it was still weighing on the minds of people far smarter than me.
That is/was closer to the truth than you can possibly imagine ...
Back then, the principal question seemed to be "how can we mitigate quantum effects?" whereas now, as noted upthread, there seems to be a split between "how can we mitigate quantum effects?" and "how can we make quantum effects work in our favour?" - as node sizes shrink, these questions, and their answers, become rather more important.
Of course, we all know that the real answer is "you never know until you look" ;-)
Has there been any more recent news on Krzanich's alleged insider dealing?
"Has there been any more recent news on Krzanich's alleged insider dealing?"
Wondering, too. Maybe some US posters can tell us? I thought the SEC came down hard on people doing this? Unlike the French AMF, which is still sleeping peacefully.
This kind of shit, as I was made aware myself in the past, is the reason why I **never** buy any shares.
There is no industry standard to denote node size and the Intel 10nm node is about the same overall feature size as the TSMC 7nm node afaik.
Intel seems to have bet the boat on EUV litho, but it wasn't ready for 10nm, and they seem to be having problems getting it to work for their 7nm and 5nm processes. TSMC is now ahead of the curve by miles.
That is true: Intel has lost its prior technical advantage, but perhaps is not yet that far behind its industry peers.
However, missing their committed business-critical timeline by years not only once but twice indicates a severe problem in business execution. Given the nature of their trouble, it's concerning that the current CEO has background in finance, not in engineering or operations.
Intel have been slipping for some time now, and this is becoming a bit of a downward spiral. For the last few refreshes we have been using Intel on our HPC clusters. This now looks highly unlikely for the next, as AMD is a real contender again. Lower cost and power equate to more cores/threads/cycles and capacity for the same budget. Arm is also starting to look interesting for some workflows as well.
The problem for Intel is that nobody now believes that they can deliver on their roadmaps for any CPUs beyond 14+++++, compared to AMD where nobody doubts this any more as TSMC keep hitting (or beating) all their targets for rolling out new high-yielding process nodes.
Intel stuck with inhouse fabs and lots of different big monolithic chips because they could afford it and this is what always worked for them in the past, and ignored the oncoming train wreck. AMD were forced to go to foundries because unlike Intel they had no choice, and they went to chiplets partly because they couldn't afford to do multiple foundry tapeouts for different monolithic-die SKUs.
In hindsight, these look like two of the smartest decisions AMD had no choice but to make, and two of the dumbest decisions Intel made out of pig-headedness ;-)
For the chiplets, one of the other reasons was apparently that they were expecting lower yields from the new 7nm process, and the smaller you can make your chips, the better the yield overall (i.e. less wasted wafer from bad silicon when using smaller die sizes).
It also meant the IO die, which doesn't need to run at the same speed as the CPU itself, could be made on an older, mature and so cheaper node, and of course those didn't need to be made by TSMC.
All of which helps keep the costs down of course!
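The yield argument can be sketched with a simple Poisson defect model (a standard textbook approximation; the defect density below is a made-up illustrative figure, since real foundry numbers are secret):

```python
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of good dies under a Poisson defect model: exp(-A * D0)."""
    return math.exp(-area_mm2 * defects_per_mm2)

d0 = 0.002  # assumed defects per mm^2 on an immature process (illustrative)

monolithic = die_yield(640, d0)  # one hypothetical 640 mm^2 monolithic die
chiplet = die_yield(80, d0)      # one hypothetical 80 mm^2 chiplet

print(f"monolithic yield: {monolithic:.0%}, chiplet yield: {chiplet:.0%}")
```

With these numbers a big monolithic die yields under 30% while each small chiplet yields around 85%, and since chiplets are tested before packaging, a defect scraps one small die rather than the whole 640 mm² part.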
As a physics student in the late 90s... Uni lecturers were all adamant that quantum tunnelling represented a hard limit on CPU die shrinking, with the limit somewhere of the order of 5 to 12nm. 20 years on, those limits are being poked and prodded by CPU manufacturers. It's remarkable that AMD's supply chain has managed to pull anything working off at this range at all, given the esoteric and incomplete knowledge we have of operation in that space. It is equally remarkable that Intel has missed out. One assumes the IP is being carefully corralled by the scientists and lawyers who "can", while Intel missed the boat.
Rather less esoteric: Intel's recent decisions to lock low-end motherboard chipsets out of basic options that have been available since the very first Core CPUs are pure profiteering, and serve to drive budget customers further away. Performance-driven customers almost universally look to AMD now. With ARM on the desktop becoming a reality, I do wonder whether they are starting to retarget their efforts away from x86. Wintel isn't what it used to be, and they may want to disassociate.
Biting the hand that feeds IT © 1998–2020