Heating
It's interesting that Pat has not found a career in building heating systems.
A PC with an Intel processor can heat an entire room during the winter.
Intel CEO Pat Gelsinger sees a future where everything is a computer assembled from chiplets using advanced packaging technologies like Intel's own as the chipmaker seeks to keep Moore's Law alive. Gelsinger was delivering a keynote from the annual Hot Chips conference, running this year in virtual form. As you might expect, …
Come on El Reg, it's Moore's law, not Moore's Law. If it were a "Law" it would have a formula you could plot on a graph and stuff. And Intel talks about it as if the economics that underpinned it had in any way been in operation over the last ten years - forget Moore's law, we're in Amdahl country now.
The drivers behind this are that customers don't just want more chips, they want more powerful chips, because AI models are getting larger and data volumes are getting bigger.
AI models aren't being run on digital silicon because that's the best way to do it - it's just that it's currently the only option. It's hugely inefficient - we know a human brain can develop a language model using a fraction of that energy.
Real breakthroughs do happen too.
My thoughts exactly. If you want to make an enormous AI model, then use a neuromorphic chip. The performance gains could easily be several orders of magnitude.
I'm not sure why there isn't more pressure on this front. My best guess would be that the fundamental design of how neural networks work is still an area of very active research, so making a huge investment in one specific neuromorphic design carries a very large risk.
Which is reasonable - although, at some point, the economy of scale has to tip. Probably as soon as someone figures out a really good and widespread use case for very large models.
Just last week I read an announcement about a new AI part that's apparently partly analog. It's designed to do the job that intense, power-hungry digital computation does today, the goal being to build technology that can be incorporated into devices instead of having to run in the cloud as it does at present.
So the work's ongoing. I expect in a decade or two we will come to regret making our devices smart (see the current series of "Non Sequitur" comics for suggestions).
I can't help but think a logical thing for Intel to do would be to get more support out there for developers to take advantage of the silicon.
Processors get fatter and gain more features, but in practice you are dealing with 5, maybe 10 layers of abstraction by the time you are grinding through your problem in your API and IDE of choice. Can you rely on all those layers to take advantage of fatter silicon?
NVIDIA caught on to this and produced the widely used CUDA API, which very definitely lets you exploit the hardware without too many headaches.
By contrast, Itanium failed (amongst other causes) because software, particularly compilers, could not take advantage of the system's features.
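To make the CUDA point concrete, here's a minimal sketch (kernel name and launch sizes are just illustrative, not taken from anything above) of how the API hands your code more or less straight to the GPU's thread hierarchy rather than burying it under layers of abstraction:

```
#include <cuda_runtime.h>
#include <cstdio>

// Each thread handles one element: the block/thread indexing below maps
// directly onto the GPU's hardware thread hierarchy, with no intervening
// abstraction layers to second-guess.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the example short; explicit cudaMemcpy works too.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // grid size chosen by the programmer
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

You pick the grid and block dimensions yourself, which is exactly the sort of low-level control the higher abstraction layers tend to hide from you.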
Indeed. It's a sad sight that Intel are reduced to putting together parts manufactured by other fabs. I'm still unsure whether Intel's complete failure to keep up with process shrinks is down to complacency or to them simply not having the best engineers any more.
Tick tock, people.
And let's skip the "EUV" BS. They're into X-ray territory already.
The challenge is to find a way to do it at scale, i.e. whole chips or wafers at a time.
A Smith-Purcell generator could do it down to 3.5nm, using crystals of metal salts as the grating.
Beyond that things get kind of tough.