Well, if I won the lottery, I'd be on an "evaluation" system in no time. I've really got an urge to play with RISC-V based silicon, and this is a very different beast from traditional CPUs, so it would be a gas to experiment with. I see no reason VMware couldn't support this "chip" as one of the fattest workload systems to hit the data centers to date - who other than SPARC ever came close to 1,000 general-purpose cores in the past? And SPARC was nowhere near; they just focused on a plethora of low-power cores running web loads.
Samsung, others test drive Esperanto's 1,000-core RISC-V AI chip
Samsung's IT services arm and other companies are said to be testing out a processor that sports more than 1,000 general-purpose RISC-V cores to deliver what the chip's designer claims is faster and more energy-efficient AI inference performance than power-hungry specialty silicon. The chip designer, Esperanto Technologies, …
COMMENTS
-
-
Saturday 23rd April 2022 09:45 GMT Justthefacts
Re: A bit late?
Very late indeed. Yes, there are many hugely more powerful chips you could be buying. Everything is horses for courses.
This one is specifically being touted for *inference*. Okey-doke, well that's quite a specific workload, and the NVidia A100 does nicely on it. The ET-SoC-1 mounts 1,000 cores at 128 INT8 ops/cycle at 2 GHz, which is 256 TOPS. The A100 achieves 624 TOPS even without sparsity, and Hopper is treble that. The ET-SoC-1 isn't going to trouble any scoreboard with that.
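The peak-TOPS comparison is simple back-of-envelope arithmetic; a quick sketch using the figures in this thread (rounded; the A100 number is Nvidia's published INT8 dense figure):

```python
# Peak INT8 throughput estimate for the ET-SoC-1, per the figures above.
cores = 1000            # general-purpose RISC-V cores (rounded)
ops_per_cycle = 128     # claimed INT8 ops per core per cycle
clock_hz = 2e9          # 2 GHz

et_soc1_tops = cores * ops_per_cycle * clock_hz / 1e12
print(et_soc1_tops)     # 256.0 TOPS

a100_tops = 624         # Nvidia A100, INT8 dense (no sparsity)
print(round(a100_tops / et_soc1_tops, 1))  # ~2.4x peak advantage to the A100
```

Peak numbers only, of course; sustained throughput on real models is a different question.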
Oh, you meant 1,000 *general-purpose cores that can run Linux*? Bait-and-switch… Well, if you're expecting general purpose, you're going to be sadly disappointed. First off, they've got *blocks of eight cores sharing each 32 kB L1 instruction cache*. Each two-core pair only gets round-robin access to its instruction cache once per four cycles. Think how crippled that cache bandwidth is. Basically, *unless* it's running the tight-loop matrix multiply, only a quarter of the 1,000 cores can even execute instructions, and even then they have to be running perfectly optimised, coordinated dual-core assembler to avoid stepping on each other's toes. So in reality it's 128 cores, single-issue, where each of those cores is roughly half the oomph of an x86.
And no, I haven’t even started on how crippled this chip actually is for general purpose Linux, because wait till you find out that the L2 cache is just 1MB per 8 cores. And that there’s no cache coherency implemented.
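The per-core resource arithmetic implied by those figures is easy to sketch (numbers taken from this comment, not from Esperanto's own documentation):

```python
# Per-core cache and fetch-bandwidth arithmetic for an 8-core block.
l1i_bytes = 32 * 1024        # shared L1 instruction cache per 8-core block
cores_per_block = 8
print(l1i_bytes / cores_per_block)   # 4096 bytes of L1I per core

# Four two-core pairs share the cache round-robin, so each pair
# gets a fetch slot only once every 4 cycles.
pairs = cores_per_block // 2
fetch_duty_cycle = 1 / pairs
print(fetch_duty_cycle)              # 0.25

l2_bytes = 1 * 1024 * 1024           # L2 per 8-core block, per the comment
print(l2_bytes / cores_per_block)    # 131072 bytes (~128 KiB) per core
```

Those per-core budgets are fine for a tight matrix-multiply kernel resident in cache, but miserable for general-purpose Linux workloads with large instruction footprints.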
Oh, you meant "open source core", did you? Bait-and-switch again… The Esperanto core is about as proprietary as it gets. They've implemented proprietary instructions, as the RISC-V standard anticipates. Not only are they not open-sourcing those, *they require an NDA to find out what they do*.
-
Saturday 23rd April 2022 10:25 GMT Anonymous Coward
Re: A bit late?
Thanks for the details; that derails my thinking, and just shows how companies overstate the capabilities of their products.
So I guess it is Arm64 or Amd64 architectures if you want proper general-purpose processing.
What I really hate are these new "mixed mode" consumer/gaming CPUs. Those used to drive the low-end workstation processor market; with that move, I'm forced to "upgrade" to a processor that is likely to cost $3-5000 for the next box instead of $500-1000. Whatever happened to the middle-ground pricing tiers for high core counts?
-
-
Saturday 23rd April 2022 21:17 GMT Justthefacts
Re: A bit late?
Well, it's a factor of 3x-8x non-competitive with NVidia for AI TOPS. As for TOPS/$, why do you think it would be cheaper than NVidia Tensor? This is fabbed in 7nm, being pitched against Hopper's 4nm. This chip is a giant silicon area; it's going to be stupid expensive.
https://www.eetimes.com/wp-content/uploads/Esperanto-Glacier-Point.jpg
And even more importantly, NVidia has much more pricing power with TSMC than Esperanto. Esperanto will be paying at least double per mm2.
But in one way, you’re right. There’s no need to guess. Amazon are assessing, and if this chip is price-competitive they will buy. You should see Esperanto splash the design win on their website within the next quarter, here.
https://www.esperanto.ai/news/
Watch that space. ROFL.
-
Sunday 24th April 2022 17:26 GMT Anonymous Coward
Re: A bit late?
> I want RISC-V not nvidia tensor which will be astronomically expensive compared to this I would imagine [ ... ]
If this Esperanto thing becomes somewhat viable, it will also become just as astronomically expensive as NVIDIA.
Free and (somewhat) open RISC-V ISA Spec does not translate to cheap silicon implementation.
The business model around AI/ML in Silicon Valley is centered around getting a big slice cut out of NVIDIA's money pie. Not much else.
The VC's are in it for a reason, and it has nothing to do with cheap or less expensive.
-
-
Saturday 23rd April 2022 14:48 GMT BOFH in Training
Re: A bit late?
Good details. This blows a hole in the idea of running a web server with 1,000 threads on a single chip.
They may still find a place in ML if they are priced accordingly - say, 1/3 the cost of an A100, and cheaper to run (20 watts for this chip against how many hundreds of watts for the A100?).
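The perf-per-watt angle can be sketched with the numbers floating around this thread; the 20 W figure is the claim for the ET-SoC-1 above, and the ~400 W A100 board power and 624 INT8 TOPS are Nvidia's published SXM figures, so treat this as indicative only:

```python
# Hypothetical peak-TOPS-per-watt comparison, figures from this thread
# plus Nvidia's published A100 numbers. Peak figures, not sustained.
et_tops, et_watts = 256, 20        # ET-SoC-1: claimed peak INT8, ~20 W
a100_tops, a100_watts = 624, 400   # A100 SXM: INT8 dense, board power

print(et_tops / et_watts)          # 12.8 TOPS/W
print(a100_tops / a100_watts)      # 1.56 TOPS/W
```

By these (very rough) numbers the efficiency story favours Esperanto by roughly 8x, which is exactly the pitch; whether it survives contact with real workloads is the open question.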
Maybe edge computing when there may be a requirement to be more power efficient?
-
Tuesday 26th April 2022 08:05 GMT Francis King
Re: A bit late?
There have been so many of these chips with a massive number of cores. Parallella had one designed: https://parallella.org/2016/10/05/epiphany-v-a-1024-core-64-bit-risc-processor/. Each time, the same problem: with that many cores, each core doesn't get much cache memory.
Great headline though.
-
-
-
-
Saturday 23rd April 2022 11:12 GMT Mike 137
"a processor that sports more than 1,000 general-purpose RISC-V cores"
A faster Transputer array on a chip?
I recently found on my library shelf the 1986 Inmos draft Transputer reference manual. Sadly, although RISC and exhaustively interconnectable, the Transputers never really took off. Sometimes it takes decades for a good idea to be finally realised.
-
Sunday 24th April 2022 16:51 GMT Anonymous Coward
Re: "a processor that sports more than 1,000 general-purpose RISC-V cores"
I have a transputer board. I used to work on some financial modelling software which could use them. We stopped when you needed £30K of transputers to match £2K of PII. Fascinating to work on programming them, though. I think the programming model still has something to teach us these days.
-
Tuesday 26th April 2022 08:08 GMT Francis King
Re: "a processor that sports more than 1,000 general-purpose RISC-V cores"
"Sometimes it takes decades for a good idea to be finally realised."
They could do that now: a cut-down motherboard with a modern AMD/Intel processor, soldered-on memory, serial channels. It's such an obvious idea, I suspect that someone has already done it.
-
-
Sunday 24th April 2022 07:42 GMT John Savard
Impressed
I am amazed that someone is able to fit 1,004 processors on a single die. (Or should that be 1,028 processors?) (EDIT: I see it's 1,092 processors, 1,088 plus 4.) I didn't realize this was even possible yet.
I could be understanding it wrong, but it seems to me that only the four high-performance processors can access off-chip memory. So this chip is sort of like the CELL processor from IBM, except that all the cores have the same instruction set.
-
Monday 25th April 2022 03:15 GMT Kevin McMurtrie
Maybe a media codec chip
This might make for a great software-defined media codec chip. All of the pattern searching needed to efficiently compress natural media takes so much computational power that practical encoders have to accept limitations. A chip like this could improve the quality per bitrate in live 4K video recording. It could probably improve playback quite a bit too.
-
Monday 25th April 2022 15:10 GMT Anonymous Coward
Specialty?
" a processor that sports more than 1,000 general-purpose RISC-V cores to deliver what the chip's designer claims is faster and more energy-efficient AI inference performance than power-hungry specialty silicon."
Eh, maybe I was sleeping and am now splitting hairs, but 1,000+ cores on a single chip still doesn't qualify as "specialty silicon"? In whose dictionary?