1) My ZX Spectrum boots in under a second without anything approaching even 1% of NVMe speeds. Boot time is not an indicator of performance. I used to be able to boot old PCs just as fast from hard disk (if you ignore the BIOS memory check); it doesn't mean anything. And even then, the OS "needs" NVMe to boot that fast in the first place.
2) Microkernels suffer from poor performance because every interaction between subsystems means passing messages and copying data across isolation boundaries instead of just sharing it. Memory gets very contended and the system simply can't perform as well. This is quite literally the MINIX vs Linux argument all over again. So it might boot on a sixpence, but it could be dog-slow after that.
3) Kernels and drivers (which are literally stated as one of the largest areas of Linux) require direct memory interaction to function properly. This is an unsafe operation, and a bug in a driver or kernel in the wrong place means crashes. BSD is god-knows-how-old and you're still complaining about crashes in its drivers. How long before a Rust OS, which has to separate all the safe parts from the unsafe parts and put all the *same* kinds of checks around the unsafe parts to make them "logically" safe (but not guaranteed safe to operate on), will be able to compete on hardware support, speed and everything else?
4) The "as a file" scheme is not a problem. Why is having a device show up inside the device tree which is hosted on the device a problem? Of course it has to be there... it's a device itself, therefore it's in the device tree. The device tree, however, is not located inside the device: the root / is not on your hard drive, it's a virtual root in RAM. Of course you can overlay-mount over it, almost everyone does, but then /dev/ is a virtual filesystem that's not on your hard drive, and that's what holds the actual drive. There is nothing stopping you from having a virtual root with /storage containing all the drives and /devices containing all your devices, eliminating any nesting in seconds. You don't because... why would you? It causes no problems and in some circumstances can come in really handy.
It's another ReactOS / MINIX, from what I can see. And self-hosting is a milestone, sure, but it shouldn't be that hard. If you have an OS with drivers, a GUI, booting, etc., then self-hosting is really not much of a step at all. Writing the initial bootstrap compiler is horrible, but there are projects for that. First you write a micro compiler, simple enough to do by hand and very feature-limited on purpose. Then you write a mini compiler in the micro language. Then you throw something like tcc at the mini compiler, which gives you a full compiler without all the bells and whistles, from which you can then build anything else (including gcc).
If you have a full compiler project already written, in a C-like language, that you have control over, it's just bootstrapping to greater and greater functionality (and likely memory safety! The first micro compiler won't be very memory safe at all, but it won't matter, because you'll be writing precisely one program in it, and that program needs to interact at the lowest levels anyway). I think it's an actual problem that it's taken this long to bootstrap a working Rust compiler.
You've gained nothing security-wise, for unspecified performance, on a niche system, when you could have just Rust-ised the entire userland of one of the common OSes and then started Rust-ising the driver layer of that OS. You'll hit the same cliff-edge where you lose all memory safety anyway, but you wouldn't be re-inventing the wheel, and you could smarten up the kernel/user divide of a working OS along the way.