Re: I think the eternal question I've never really understood about the transputer is
[Author here]
There are a bunch of things that remain difficult problems, as I understand it. (I am _not_ an expert in parallel systems or anything!)
Transputers had hardware-controlled comms links, so building meshes or grids of them was relatively easy. No modern chip has that. Instead, we have processors with lots of independent cores, but it's the OS's problem to allocate tasks to cores and use them efficiently.
A little of that was done in hardware on the Transputer. There was also the Helios OS, which I've written about recently on the Reg:
https://www.theregister.com/2021/12/06/heliosng/
... and the Occam programming language, which was designed for parallel programming.
https://www.theregister.com/2022/05/06/pi_pico_transputer_code/
[Extremely simplified big-picture hand-wavy explanation]
C is not inherently parallel and has no direct language constructs for this; the OS has to handle it. The Transputer, programmed in Occam, brought it right into the language.
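To give a flavour of the difference, here is a toy sketch in Go, whose goroutines and channels descend from the same CSP ideas as Occam: the concurrency is a language feature, with no OS threading API in sight. Everything here is invented for illustration; it's not transputer code.

```go
// Two communicating processes, in the Occam/CSP spirit:
// one doubles values, the other feeds it and reads results back.
package main

import "fmt"

// doubler receives values on in and sends each one doubled on out.
func doubler(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 2
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go doubler(in, out) // runs concurrently: a language primitive, not an OS call

	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()

	for v := range out {
		fmt.Println(v) // prints 2, 4, 6
	}
}
```

In Occam the equivalent would be a PAR of two processes joined by a declared CHAN; either way, the point is that the compiler and runtime own the plumbing, not the OS.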
But Unix isn't inherently parallel either. It was built for a non-networked minicomputer with text terminals. That's why "everything's a file" and so on.
So, now, because there is so much investment in xNix and xNix code, all of that is being painfully retrofitted onto traditional xNix, instead of everyone moving on to Unix's appointed successor, Plan 9, or Plan 9's successor, Inferno. Those have networking right in the kernel: processes can move directly from one network node to another. You can't do anything like that on Linux, except extremely clumsily, by migrating a VM or by getting Kubernetes to manage a cluster of nodes running containers: millions of lines of extra code stacked on top to approximate something that was built into Plan 9's kernel...
And which, on the Transputer, was done in a mixture of hardware and the Occam language.
Some of this is being reimplemented, slowly and painfully, in Go and Rust and the like... in a much more difficult, complex form, in code at least 1000x bigger, yet less capable and less flexible.
Nature demonstrates that the way to build a big computer is from extremely large numbers of small, slow computers with lots and lots of communications links. Instead, we build very big, hot, power-hungry processors with no direct support for comms between them. It's hard to communicate from one core on one chip to a core on another chip, so these systems don't scale very well. The only answer we've found so far is big clusters: lots of computers all chewing on different bits of the same data set and not really talking to one another much.
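That transputer-ish shape, lots of small nodes, each wired only to its immediate neighbours by point-to-point links, can be sketched in a few lines of Go: here a ring of tiny "processors" passes a token along, each adding one hop. The topology and names are invented purely for illustration.

```go
// A toy "mesh" in the transputer spirit: each node sees only its two
// neighbouring links, never the whole machine.
package main

import "fmt"

// node forwards every value it receives to its right-hand neighbour,
// adding one hop on the way, then propagates shutdown by closing the link.
func node(left <-chan int, right chan<- int) {
	for v := range left {
		right <- v + 1
	}
	close(right)
}

func main() {
	const n = 8 // eight tiny "processors" in a line
	links := make([]chan int, n+1)
	for i := range links {
		links[i] = make(chan int)
	}
	for i := 0; i < n; i++ {
		go node(links[i], links[i+1]) // each node gets exactly two links
	}
	links[0] <- 0 // inject a token at the first node
	close(links[0])
	fmt.Println(<-links[n]) // prints 8: one hop per node
}
```

Scaling this up is just more goroutines and more channels; on a transputer grid the same shape mapped onto physical chips and hardware links.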
In a lot of ways, because late-1980s OSes and languages didn't natively do the hard stuff (multiprocessor support, clustering, memory protection: all things that FOSS xNix and Windows NT finally solved in the early 1990s using minicomputer technology), very smart people back then worked hard on incredibly clever fixes for the problems of parallel computing and so on.
But it was never mainstream and didn't catch on.
Then, a decade later, this stuff became easy and mainstream with modernised 1970s tech, and the industry lost 15-20 years of progress and went backwards.