"general-purpose RISC-V cores"
Are those Full RISC? Extended?
RISC-V chip biz SiFive says its processors are being used to manage AI workloads to some degree in Google datacenters. According to SiFive, the processor in question is its Intelligence X280, a multi-core RISC-V design with vector extensions, optimized for AI/ML applications in the datacenter. When combined with the matrix …
The general-purpose cores in the X280 are 64-bit RV64GCV CPU cores. As the name suggests, RISC-V is quite RISC.
RISC-V uses letters to denote extensions and features. GCV means the CPU cores support the base integer instruction set (I) plus integer multiplication/division (M), atomic operations (A), single- and double-precision floating-point math (F and D), compressed instructions (C), and vector math (V), along with some other bits and pieces — the G is shorthand for IMAFD plus a couple of housekeeping extensions.
RV64GCV is all you need to run an OS like Linux and its applications. It's fit for general-purpose work, and it includes vector math support in hardware.
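To make the letter soup concrete, here's a small hypothetical helper (not from the article, and deliberately simplified — it ignores multi-letter Z-extensions and version suffixes) that unpacks an ISA string like "rv64gcv":

```python
import re

# Single-letter RISC-V extensions covered by this sketch.
EXTENSION_NAMES = {
    "i": "base integer instructions",
    "m": "integer multiplication/division",
    "a": "atomic operations",
    "f": "single-precision floating point",
    "d": "double-precision floating point",
    "c": "compressed instructions",
    "v": "vector operations",
}

def decode_isa(isa):
    """Split an ISA string like 'rv64gcv' into width and extensions."""
    match = re.match(r"rv(\d+)([a-z]*)", isa.lower())
    if not match:
        raise ValueError("expected something like 'rv64gcv'")
    xlen = int(match.group(1))                      # register width in bits
    letters = match.group(2).replace("g", "imafd")  # expand the G shorthand
    return xlen, {ch: EXTENSION_NAMES.get(ch, "unknown") for ch in letters}

xlen, exts = decode_isa("RV64GCV")
print(xlen)          # 64
print(sorted(exts))  # ['a', 'c', 'd', 'f', 'i', 'm', 'v']
```

So "rv64gcv" expands to 64-bit registers with I, M, A, F, D, C, and V — exactly the "Linux-capable plus vectors" profile described above.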
C.
A recent TheNextPlatform article, published here, suggested that Arm had 10 years' supremacy ahead of it in the datacentre... yet RISC-V seems to be making a grand entrance already. What was that desperate comment by an Arm executive last week... "We respect RISC-V but it's not a rival in the datacentre". It bloomin' well is!
Are we actually watching Arm implode?
Always respect the bulldozer that is knocking down your house...
Arm is a dead man walking... the model looks stale.
The US blocking technology exports to Asia has essentially pushed RISC-V to the forefront. It's appealing precisely because the US can't block it and can't stop it. US semiconductor firms oppose the government's sanctions and blocks on technology sales; the more the government does this, the worse it gets for US, UK, and European businesses. The protectionists that post here are lunatics. Intel has suffered enormously from US sanctions, and it is committed to RISC-V. These chips will be a commodity, but the add-on chips and co-processors will be customizable. The US will have to block those too, just as it is blocking the Nvidia A100. The net result is that Chinese companies get their own GPUs and open markets without US competition. How is that a winning strategy, Mista Politician?
RISC-V chips can be built at 180nm, and SiFive has plans on the board to go to 7nm. It is unblockable.
The US, Britain, and Europe are all assisting with their sanctions: indirectly supporting RISC-V while destroying their own proprietary chips.
Linux should be shooting the moon on RISC-V.
Arm's got some legs for a while yet. Getting a really compact, and therefore cheap, layout is still largely an art as far as I can see. I used to work with a bloke who used paper and pencil to lay out small circuits of 150 FETs or so, and he would sometimes come up with layouts 10% smaller than the best computer compaction available in the late 80s — and that software basically tried every conceivable positioning of every component against the DRC. I like to think I'm not particularly dumb, but he seemed to come up with layouts that were just mind-bogglingly effective. Come to think of it, he might have been human AI!
The big risk to Arm is the same risk that is hitting Intel presently: the Linux kernel itself is highly portable (able to be ported to new CPUs and architectures as they come out), gcc/clang are highly portable, and the general software stack of a typical Linux desktop or server is highly portable. I recall reading about Google getting their usual stack up on a new CPU (when they decided to try some Arm, PowerPC, etc. systems to measure performance and power use) and having things up and running in a matter of hours if it went smoothly, and in a day or two if it didn't.
Not that it'll put Arm out of business, but if RISC-V beats Arm on performance/watt and scales to high enough performance, that's likely to put a real dent in Arm's use in the data center; and for that matter, if a future architecture (or a surprise resurgence of an existing one) overtakes RISC-V, it could easily cut into RISC-V's use in turn.
Interesting use of this, I wonder if this is effectively CPUs with some AI accelerators hanging off PCIe lanes all on the chip, or if they further integrated things so the CPU and accelerators have full-speed access to each other's resources?
Tens of cycles of latency doesn't sound like PCIe to me.
That's got to be direct access.
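For a rough sense of scale — the PCIe figure below is a commonly quoted ballpark, not from the article, and real values vary a lot by platform:

```python
# Back-of-envelope: "tens of cycles" vs. a PCIe round trip.
# Assumed numbers: ~2 GHz core clock, ~500 ns PCIe round-trip latency
# (a low-end figure often quoted for a host-to-device read and back).
clock_hz = 2.0e9
cycle_ns = 1e9 / clock_hz          # 0.5 ns per cycle at 2 GHz
pcie_round_trip_ns = 500.0
pcie_cycles = pcie_round_trip_ns / cycle_ns
print(pcie_cycles)                 # 1000.0 cycles, far beyond "tens"
```

So if the latency really is tens of cycles, the accelerator is sitting on something much closer to the core than a PCIe link — a coherent on-die interconnect, most likely.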
TBH it actually sounds like this is more or less a custom GPU-on-Chip (GOC?) with the RISC-V handling communication and parcelling out the work to the accelerator hardware, avoiding the need for Google to reinvent the wheel.
Instead the well-known RISC-V handles all the basic logic, arithmetic and communication with the host processors (presumably via PCIe or IP), and Google's magic sauce handles the heavy lifting.