I'm not sure this is the right approach for the future. In fact, it seems more political than anything. Politics aside, what we need is not so much new CPU architectures as entire system architectures. Traditional CPUs were designed for old-style computing, when a machine was dedicated to a single job such as scientific calculation.
But now we have multiprogramming and new apps being loaded onto machines by innocent users who know little about security. Current CPU architectures (and languages) offer very little in the way of security, being based on the assumption that you own the whole machine and can see all of memory as one flat space.
Modern systems, even at the low level of an OS, need structured memories that respect boundaries. A quick browse through the RISC-V documentation revealed no clues as to any such support for real modern computing.
That seems a shame to me, and most probably a lost opportunity to rethink things. Performance is still put way ahead of user protection, which, built into the lowest levels of the architecture, could be implemented in the most performance-effective way, rather than piling on layers of software that are far more effective at sapping CPU cycles.
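To make "structured memories that respect boundaries" concrete, here is a minimal sketch of a capability-style reference that carries its own bounds, roughly the direction research projects like CHERI have explored. Everything here (cap_ref, cap_load) is an invented name for illustration, and a bounds-aware CPU would perform this check in hardware on every load and store; the software check below only shows the idea.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "capability" reference: instead of a raw address into a
 * flat space, every memory reference carries the bounds of the region
 * it may touch. On a bounds-aware CPU the hardware would check these
 * bounds on every access; here the check is software, for illustration. */
typedef struct {
    uint8_t *base;   /* start of the region this reference may access */
    size_t   length; /* size of the region in bytes */
} cap_ref;

/* Checked load: faults (here: aborts) on any out-of-bounds access,
 * the way a bounds-aware CPU would raise an exception. */
uint8_t cap_load(cap_ref c, size_t offset) {
    if (offset >= c.length) {
        fprintf(stderr, "capability fault: offset %zu >= length %zu\n",
                offset, c.length);
        abort();
    }
    return c.base[offset];
}

int main(void) {
    uint8_t buf[16] = {0};
    cap_ref r = { buf, sizeof buf };
    printf("%d\n", cap_load(r, 3));   /* in bounds: fine */
    printf("%d\n", cap_load(r, 42));  /* out of bounds: faults */
    return 0;
}
```

The point is that the check travels with the reference itself, so no amount of buggy or hostile code can quietly widen its own view of memory.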
You could look at this idea as the inverse of distributed computing. Instead of one process being distributed across many machines, many processes are implemented on a single machine on virtual processors (hardly a radical idea either), but the very ability to do this is baked into the system (CPU) architecture. Smalltalk was also an attempt to view the world in this way.
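As a toy model of what that could mean, here is a sketch in the same illustrative spirit (vproc, vp_send, and vp_run are made-up names, not any real architecture's interface): each virtual processor owns private memory, and the only way across its boundary is a message, much as in Smalltalk's everything-is-a-message view.

```c
#include <stdio.h>
#include <string.h>

enum { VP_MEM = 64, QUEUE_LEN = 4, MSG_LEN = 32 };

/* One virtual processor: a private memory region plus an inbox.
 * No other processor can reach its memory; messages are the only
 * way anything crosses the boundary. */
typedef struct {
    char memory[VP_MEM];            /* private to this processor */
    char inbox[QUEUE_LEN][MSG_LEN]; /* the only way in */
    int  pending;
} vproc;

/* Sending copies data into the receiver's inbox; nothing else
 * crosses the boundary between the two processors. */
int vp_send(vproc *to, const char *msg) {
    if (to->pending == QUEUE_LEN) return -1;  /* inbox full */
    strncpy(to->inbox[to->pending], msg, MSG_LEN - 1);
    to->inbox[to->pending][MSG_LEN - 1] = '\0';
    to->pending++;
    return 0;
}

/* A processor drains its own inbox into its own private memory
 * (here it just records the most recent message). */
void vp_run(vproc *vp) {
    for (int i = 0; i < vp->pending; i++)
        snprintf(vp->memory, VP_MEM, "last msg: %s", vp->inbox[i]);
    vp->pending = 0;
}

int main(void) {
    vproc a = {0}, b = {0};
    snprintf(a.memory, VP_MEM, "hello from a");
    vp_send(&b, a.memory);  /* a may message b ...           */
    vp_run(&b);             /* ... but never touch b.memory   */
    puts(b.memory);
    return 0;
}
```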
Once systems are designed and implemented in this manner, real distribution becomes easy (but that is another subject).
These are not really new ideas though – they need revisiting.