Re: Rusux
OS
There is a Rust OS called Redox. It's another Unix-like OS, in that it behaves as the Unix standard requires. Its kernel is written in Rust too. It's nowhere near production-ready, but what is notable is the timeline: the project went from nothing to a running user desktop in three years flat. That's a pretty rapid rate of development, and it bears out one of the oft-cited benefits of developing in Rust - it's faster, simply because one is not spending time chasing bugs that come down to memory faults.
Rust is creeping into the Linux kernel, and may or may not eventually displace C.
Userland
Rust is creeping into the Linux userland too. There is a Rust re-implementation of the standard GNU coreutils (things like ls, cat, all the standard CLI tools). This project, "uutils", was started by someone at (I think) Mozilla who decided it was high time he learned Rust, and chose this as a learning project. It blew up from that self-teaching start. The interesting thing is that the end result is an improvement, not just a slavish translation of the original C.
Hardware, and a Very Big Bet
Rust has an interesting facet - "Fearless Concurrency" - which may have played a part in that. It's what let Mozilla parallelise Firefox's CSS engine - which is why it's so fast. I mention it because there's an aspect of Rust here that may become very important in the future.
Rust's syntax means that the compiler knows for sure the ownership of objects: can a piece of code change a piece of data, or only read it? It knows this. This means that Rust can easily implement CSP, i.e. it implements "Channels" much as golang does, but without needing a garbage collector. The important bit is: how is this implemented under the hood?
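To make the two points above concrete, here's a minimal sketch using only the standard library. The first part shows the compiler statically tracking who may read versus mutate; the second sends an owned value through a channel, so the sender gives the data up entirely and no garbage collector is needed to decide who still holds it. (The variable names are just illustrative.)

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Ownership/borrowing: the compiler knows who can read vs write.
    let mut data = vec![1, 2, 3];
    let reader = &data;        // shared (read-only) borrow
    // data.push(4);           // rejected at compile time: can't mutate
                               // while a shared borrow is live
    println!("{:?}", reader);
    data.push(4);              // fine once the borrow has ended

    // CSP-style channel: ownership of the Vec moves to the receiver.
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        tx.send(data).unwrap();
        // `data` is no longer usable here - the compiler enforces the hand-off,
        // so there's nothing for a garbage collector to track.
    });
    let received = rx.recv().unwrap();
    handle.join().unwrap();
    println!("received: {:?}", received);
}
```

The key design point is that sending is a *move*: once a value has gone down the channel, the sending thread can't touch it again, which is exactly the guarantee a GC would otherwise be needed to paper over.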
I think that the Channels are - at present - implemented by the compiler as shared memory (in the sense that the receiving thread is accessing memory written to by the sending thread). So far, so very Symmetric Multi-Processing / shared memory / computers as we know them today.
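Notice, though, that nothing in the source code of a channel program mentions shared memory at all - it is purely an implementation detail behind the `send`/`recv` API. A minimal sketch:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Today, `send` is (in practice) backed by a shared in-memory queue,
        // but the program never sees that: it only hands the value over.
        tx.send(vec![10, 20, 30]).unwrap();
    });
    let payload: Vec<i32> = rx.recv().unwrap();
    println!("sum = {}", payload.iter().sum::<i32>());
}
```

Because the API surface is just "hand this value over", a compiler and runtime could in principle back the same source code with something other than shared memory.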
However, there's no hard and fast reason why the compiler should do this, other than that this is the hardware architecture on which most of the world's code runs today. And the thing about this hardware architecture is that, today, it truly sucks. It's the reason we've had Meltdown, Spectre, and all the other cache-based probings of sensitive data in one program by another. All the major CPU designs have fallen to this: Intel, AMD, and ARMs of various flavours (including Apple's).
An alternative hardware architecture would be one in which all CPU cores are entirely separate, each with its own memory controller and L3, L2, L1 caches, joined together on a "network" allowing rapid exchange of data between cores via only that network, and not via shared L3 cache. No shared memory. Data gets transferred between cores only if the software explicitly does that. We've had real hardware architectures like this in the past - Transputers - and back then it looked like that's what we were going to have to use. The Cell processor in the PS3 was also like this internally (between its SPEs). The problem today is that it's utterly incompatible with all current operating systems, and with most multithreaded software (except for golang, Rust, Erlang).
This is where Rust could come in. It could take code written to use "Channels" and implement those across real, physical channels. The source code need not change for building on a hardware architecture radically different to today's SMP. And that'd make it a lot easier to ditch the problems that come with shared caches.
Today, hardware is stuck with SMP because of the software, and the software is stuck on SMP because no one wants to re-write it. The point is that if a re-write were to happen, and it were done in a Rust-channel way, then so long as the compiler changed, the hardware could change too, while the re-written source stayed as is.
So, I think there is far more at stake in this Linux/Rust question than first meets the eye, and it's probably the case that even today's Rust efforts in Linux aren't looking far enough ahead. A kernel written so as to be largely independent of whether the underlying hardware is SMP or more Transputer-like would be a very, very potent asset. Though it wouldn't help the software it's hosting unless that too were re-written in a strictly golang- or Rust-like channel way.
However, I fear that what I'm suggesting above would take massive coordination between many different organisations, and it's effectively socially impossible. We can't even get simple decisions about wrapping APIs written in one language in a way to benefit another without some unholy row breaking out.