
"28mm CMOS process technology"
That's not going to be easy to package ;-)
I'm guessing that should be 28nm!
Western Digital today finally flashed the results of its vow to move a billion controller cores to RISC-V designs. WD said last year it needed an open and extensible CPU architecture for its purpose-built drive controllers and other devices …
It is hard, as you say, to know when data actually hits the platter. The only one who knows is the manufacturer, who has proprietary means of knowing such things. And it is such things that make chaining a tasty opportunity. It also represents strong vendor lock-in, which has rarely failed to tempt manufacturers in a competitive market.
So, using this data, they can guarantee platter rotation sync with a known phase difference, know when data reaches the platter, and offer features that a third-party controller vendor simply cannot. Chaining would also allow hot/cold standby and phased replacement that, to the end user, appears simple: everything just works, you pull out the HD with the flashing red light and put in a new drive from the same manufacturer. This is how lock-in starts.
You won't find many drives that can sustain more than 1Gbps, even sequentially, and definitely not on random reads/writes. You'll only see that briefly to/from cache. Besides, SAS and SATA are higher speed in part so they can be shared across several drives; Ethernet switches give everybody the full 1Gbps. And let's not get into overhead: USB3 claims 5Gbps, but in practice it bottlenecks drives that can't even sustain 1Gbps.
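For anyone who wants to sanity-check those figures, here's a quick back-of-the-envelope conversion in Go; the efficiency percentages are illustrative assumptions, not measurements:

```go
package main

import "fmt"

func main() {
	// Raw line rates in bits per second.
	gigE := 1e9 // Gigabit Ethernet
	usb3 := 5e9 // USB 3.0 "SuperSpeed" headline rate

	// Assumed payload efficiencies (ballpark, not measured):
	// ~94% for Ethernet after TCP/IP framing,
	// ~80% for USB 3.0 after 8b/10b encoding plus protocol overhead.
	fmt.Printf("GigE payload: ~%.0f MB/s\n", gigE*0.94/8/1e6) // ~118 MB/s
	fmt.Printf("USB3 payload: ~%.0f MB/s\n", usb3*0.80/8/1e6) // ~500 MB/s
}
```

Even the optimistic numbers put Gigabit Ethernet at roughly 118 MB/s of actual payload, which is why a switched port per drive matters more than the headline rate.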
Memory is one of the slower parts of a computer today. Whenever your CPU actually has to go out to it, it takes a long time; caching solves part of the problem, but it quickly gets very difficult.
Wouldn't it make more sense not to have one of the slowest parts of your computer be your bottleneck?
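To make that latency gap concrete, here's a minimal Go sketch; the array size and stride are assumptions chosen to overflow a typical last-level cache with 64-byte lines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const n = 1 << 26 // 64 Mi int32s = 256 MiB, far larger than any cache
	data := make([]int32, n)

	// Sequential pass: 64-byte cache lines and the hardware prefetcher
	// hide most of the DRAM latency.
	start := time.Now()
	var sum int32
	for i := 0; i < n; i++ {
		sum += data[i]
	}
	fmt.Println("sequential:", time.Since(start), sum)

	// Strided pass: one int32 per 64-byte line (stride 16) defeats the
	// cache. Despite doing 1/16th as many additions, this loop typically
	// takes a comparable wall-clock time, because nearly every access
	// pays close to the full memory latency.
	start = time.Now()
	sum = 0
	for i := 0; i < n; i += 16 {
		sum += data[i]
	}
	fmt.Println("strided:   ", time.Since(start), sum)
}
```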
"The alternatives are 'file centric' architectures"
No, there's another obvious architecture: message passing. If you build your interconnect on asynchronous messages it can scale very well. It's the concept the Transputer used.
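Go's channels are, as it happens, a direct descendant of the same CSP model the Transputer's occam exposed, so a toy sketch is cheap; the three-stage pipeline here is purely illustrative:

```go
package main

import "fmt"

// Each "node" owns its own state and talks to the world only through
// messages, CSP-style, as occam did on the Transputer. With no shared
// memory, scaling out is just adding more nodes and more links.
func node(id int, in <-chan int, out chan<- int) {
	for v := range in {
		out <- v + id // do some local work, pass the result on
	}
	close(out)
}

func main() {
	// Three nodes chained into a pipeline by channels.
	c0, c1, c2, c3 := make(chan int), make(chan int), make(chan int), make(chan int)
	go node(1, c0, c1)
	go node(2, c1, c2)
	go node(3, c2, c3)

	go func() {
		for i := 0; i < 3; i++ {
			c0 <- i
		}
		close(c0)
	}()

	for v := range c3 {
		fmt.Println(v) // 6, 7, 8
	}
}
```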
"Memory is one of the slower parts of a computer today. Whenever your CPU actually has to go out to it, it takes a long time; caching solves part of the problem, but it quickly gets very difficult."
Memory isn't inherently slow; it can certainly be made physically far faster than current modules. The problem is the interconnect: the physical distance adds latency, and limits on how fast you can modulate even a PCB trace cap the bandwidth. I suspect the long-term answer is massively parallel systems with relatively small on-chip stores, but making that work needs a fundamentally new programming paradigm and new algorithms.
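To put a rough number on the interconnect cost, here's a quick Go calculation; the half-light-speed propagation figure and the 8 cm CPU-to-DIMM trace length are both ballpark assumptions:

```go
package main

import "fmt"

func main() {
	const c = 3e8 // m/s, speed of light in vacuum
	v := 0.5 * c  // assumed signal speed in a PCB trace (~half of c)
	dist := 0.08  // metres from CPU to DIMM, illustrative

	oneWayNs := dist / v * 1e9
	fmt.Printf("one-way trace delay: %.2f ns\n", oneWayNs) // ~0.53 ns

	// At 4 GHz a core cycle is 0.25 ns, so the wire round trip alone
	// costs ~4 cycles before the DRAM array has even begun to respond.
	cycles := 2 * oneWayNs / 0.25
	fmt.Printf("round trip at 4 GHz: ~%.0f cycles\n", cycles)
}
```

And that is just the wire; the DRAM's own row-activation and CAS latencies stack on top, which is why physical distance alone argues for small stores close to the cores.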
> ...I'm hoping to see something like Raspberry Pi but using RISC-V emerge.
Broadcom is one of the few networking chip manufacturers that is not (yet) a member of the RISC-V consortium; they seem to have stuck with ARM. The next Raspberry Pi (the Pi 4) is expected in early 2019 on a different linewidth, which is too little time to switch architecture completely.
What they could do is add a few RISC-V cores for real-time tasks, the same approach TI uses in some of its systems. Fast realtime with a fast ADC and the new VideoCore would make for an interesting SDR platform.