Only the Paranoid Survive
By making their next New Year's Resolutions in August
At the Hot Chips 2023 conference today, Intel shed light on the architecture changes, including improvements to memory subsystems and IO connectivity, coming to next-gen Xeon processors. While the x86 giant's fifth-gen Xeon Scalable processors are still a few months off, the chipmaker is already looking ahead to its next-gen …
Next year they will challenge AMD - really? They must be delusional, as AMD is already working on next year's Intel killer.
Intel are playing catch-up and investing billions to do it. For their sakes I do hope that their market predictions are accurate: one wrong turn and Intel could be in serious trouble.
Intel and AMD are mostly owned by the same funds. Do you really think they compete with each other, or just tread carefully to maximise returns?
I wonder what markets would have looked like if this kind of behaviour was illegal.
Like if either Intel or AMD could actually crush the competitor, but instead they make piecemeal improvements, ensure that people keep upgrading every couple of years, and you also get the "familiar" concept of two teams: Democrats vs Republicans, Pepsi vs Coca-Cola, tea vs coffee and so on.
So, while both chips will be fabbed using the chipmaker's long-delayed 7nm process — now called Intel 3 — the two will have different feature sets tuned to their target workloads. For example, Intel's P-cores feature its Advanced Matrix Extensions (AMX) while this functionality appears to be absent on the E-cores.
This should make HPC scheduling fun as you'll almost certainly want your code to hit the cores with vector maths units :)
This should make HPC scheduling fun
It's bad enough already: the scheduler not only has to consider P-cores and E-cores but also hyperthreading on the P-cores too. And of course there might be short-lived turbo-boosts or longer-lived thermal throttling. The CPU can give some hints to the scheduler (some info about 'Thread Director' here) but it's a rather imprecise science (and, to be fair, has always been to some extent, even with a single CPU). And that's before you consider virtual machines and what might be considered fair scheduling for them.
I can perhaps see that in the fullness of time cores will become sufficiently specialised (e.g. some for OS housekeeping, some for vector/matrix operations, some for streaming data operations, some for general logic) that specific targeting will be the norm. In the meantime it will no doubt be a hot topic for PhD theses.
How much energy (burning good clean coal, naturally) will it take to power these things?
Answers on a pinhead please.
The comment by abend0c4 is especially relevant. All these silly instructions they keep adding in microcode are just more attack vectors. It is long past time that we were able to run code directly on the native hardware rather than have everything translated by microcode.
The horse has long since sailed for general purpose compute.
All the major general-purpose architectures use microcode today, and have done for some time. There are more niche products out there, usually marketed as logic controllers, that will do the direct-to-hardware execution you describe. Performance? Software? Every bit as niche as you would imagine.
They use micro-operations.
Even the ARM1 used them:
The reason the decode is implemented in a number of separate units is because the ARM1 makes use of microcode ROMs (PLA). Each instruction is decoded into up to four µOP signal-wise. In other words, the ARM instructions are broken down into up to four sets of internal-µOP signals indicating things such as which registers to select or what value to shift by. For some complex operations such as block-transfers, the microsequencer also performs a looping operation for each register.
Article title--"Intel promises..."
We see that you have decided to join the "Always Twenty Years Away" club, Intel. Only one teensy problem--you ain't got twenty years. You ain't even got five (and "ain't" is as grammatically correct as what passes--you think--for your 'engineering expertise'). You learned your lessons well from Boeing, when you decided that the key to a rosy future lay in turning your fate over to the bean-counters, completely ignoring the fact that science and engineering are what got you to where you USED to be.
Ooohhh...and The Register--it's time for you to start digging out your recent rosy articles on Intel's glorious predictions of the future, written, no doubt, by the same bean counters and hacks who now run all the rest of the company. One that comes immediately to mind is how Intel will most definitely be profitable in 2024; that's right folks: in a mere four more months, Intel will be profitable again!
I believe Warren Buffett, who says that investing in the semiconductor business is a fool's errand.
Intel most definitely proves him correct.