
“Freedom units” teehee
Cerebras revealed its latest dinner-plate-sized AI chip on Wednesday, which it claims offers twice the performance per watt of its predecessor, alongside a collaboration with Qualcomm aimed at accelerating machine learning inferencing. The chip, dubbed the WSE-3, is Cerebras' third-gen waferscale processor and measures in at a …
Ivor Catt developed and patented some ideas on Wafer Scale Integration (WSI) in 1972, and published his work in Wireless World in 1981, after his articles on the topic were rejected by academic journals.
I remember those articles. Also later ones about autorouting to avoid defects.
He's still alive.
And that is the curse of being too far ahead. The patents run out before the uptake takes place, leaving the inventor with no benefit.
Now, had he come up with a mouse with big round ears, then he would be a rich man today. (Yes, I know that Steamboat Willie just exited protection.)
|The engineering that makes this possible is mapping out and routing around the bad cores.
The problem has always been that this costs money and wafer area, so it was always more cost-effective to chop the wafer up, test each chip, and sell them separately.
It's only with tasks that need a bazillion cores and 44GB of RAM all at once that this thing starts to make sense.
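For anyone wondering what "mapping out and routing around the bad cores" means in practice, here is a minimal Python sketch: a toy 2D mesh where known-defective cores sit in a defect map and traffic takes a shortest path that avoids them. The grid size, defect positions, and breadth-first routing are illustrative assumptions only, not Cerebras' actual redundancy scheme.

    from collections import deque

    GRID = 8                            # toy 8x8 mesh of cores
    BAD = {(2, 3), (2, 4), (5, 1)}      # hypothetical defect map from wafer test

    def route(src, dst):
        """Breadth-first search for a shortest mesh path that skips bad cores."""
        queue = deque([(src, [src])])
        seen = {src}
        while queue:
            (x, y), path = queue.popleft()
            if (x, y) == dst:
                return path
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nxt = (nx, ny)
                if 0 <= nx < GRID and 0 <= ny < GRID and nxt not in BAD and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None  # unreachable: that region of the wafer is written off

    print(route((0, 0), (7, 7)))

Real wafer-scale parts generally lean on spare cores and redundant fabric links remapped at configuration time rather than per-message pathfinding, but the principle is the same: tolerate the defects in the map instead of throwing away the wafer.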
I quite like Cerebras' non-standard approach to computer architecture, going against the grain of "accepted" concepts and towards a successful dataflow perspective, with distributed memory, on a single wafer. Doubling performance at the same power level by going from the 7nm WSE-2 to the 5nm WSE-3 is very nice. Given that the WSE-2 had 850,000 AI cores and the WSE-3 has 900,000, should we presume that the higher performance results from faster clocks?
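On the faster-clocks question, a quick back-of-envelope check shows how little of the claimed doubling the extra cores can explain, assuming the "2x" figure means total throughput at equal power and using the core counts quoted above:

    # Back-of-envelope only; the 2x figure and core counts are from the article/comment.
    wse2_cores = 850_000
    wse3_cores = 900_000
    perf_ratio = 2.0                          # claimed WSE-3 vs WSE-2 at the same power

    core_ratio = wse3_cores / wse2_cores      # ~1.06x from the extra cores alone
    per_core_gain = perf_ratio / core_ratio   # ~1.89x must come from somewhere else

    print(f"Extra cores account for {core_ratio:.2f}x")
    print(f"Implied per-core speedup: {per_core_gain:.2f}x")

So if the claim holds, nearly all of the gain has to come from per-core improvements: higher clocks, wider or better-utilised datapaths, and whatever the 7nm-to-5nm shrink buys, rather than the modest bump in core count.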