Re: 400G
You're talking apples and oranges. Ethernet may offer that kind of raw bandwidth, but that doesn't mean a single endpoint on the network can use it efficiently. Ethernet also carries framing and protocol overhead, and there are a bunch of factors involved in handling Ethernet devices. That doesn't make Ethernet inferior, but you're comparing a network interface to a block device interface. Block device interfaces running a SCSI command set (think SAS, Fibre Channel, and even InfiniBand) usually deliver much higher functional performance than Ethernet, though they share similar limitations, such as latency. Also, NVMe devices don't require a bus interface like SAS in order to be used: NVMe devices can attach directly to PCIe and perform at that bus speed. They do often get connected to array controllers so they can be grouped together for other efficiencies such as RAID sets, etc., but that's not always the case.
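To put rough numbers on "raw bandwidth isn't goodput," here's a back-of-envelope sketch. The Ethernet overhead constants are the standard per-frame figures; the PCIe Gen4 x4 link is just an assumed example of a direct-attached NVMe configuration, not anything from the post I'm replying to.

```python
# Back-of-envelope only; overhead constants are standard Ethernet framing
# figures, and PCIe Gen4 x4 is an assumed example NVMe link width.

# Ethernet: per-frame cost around a standard 1500-byte payload
payload = 1500                      # MTU payload, bytes
overhead = 7 + 1 + 14 + 4 + 12      # preamble + SFD + header + FCS + inter-frame gap
efficiency = payload / (payload + overhead)
print(f"Ethernet wire efficiency: {efficiency:.1%}")           # ~97.5%
print(f"400G goodput ceiling: {400 * efficiency:.0f} Gbit/s")  # ~390 Gbit/s

# NVMe on PCIe Gen4 x4: 16 GT/s per lane, 128b/130b line encoding
lanes, gts, encoding = 4, 16e9, 128 / 130
bytes_per_sec = lanes * gts * encoding / 8
print(f"PCIe Gen4 x4 ceiling: {bytes_per_sec / 1e9:.1f} GB/s")  # ~7.9 GB/s
```

And that Ethernet number is before TCP/IP headers, retransmits, or small-frame workloads, where the fixed 38-byte cost hurts far more.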
Mainframes aren't what you think they are. Fundamentally, they use most of the same technologies found in today's x86 platforms. In fact, stop using the word "mainframe" like it means some abstract idea that is super powerful. The current mainframe is the IBM z16. The z16 runs processors based on the Telum platform, which clock at around 5 GHz. That clock speed is meaningless, though, because as it relates to workloads it's just a pile of clock cycles for the scheduler to hand out. In the end it doesn't matter, because your "mainframes" use the z/Architecture (s390x) instruction set. It's big-endian. Porting your workloads to a proprietary IBM platform makes no sense in the modern world.
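The big-endian point is the concrete porting hazard, so here's a minimal sketch of why byte order bites you when binary data moves between x86-64 and s390x (the value and struct layout are mine, purely for illustration):

```python
import struct

value = 0x01020304

# s390x (z/Architecture) is big-endian: most significant byte first.
big = struct.pack(">I", value)
# x86-64 is little-endian: least significant byte first.
little = struct.pack("<I", value)

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Code that writes native-order integers to disk or the wire on one
# platform and reads them back native-order on the other silently
# corrupts the data:
assert struct.unpack("<I", big)[0] == 0x04030201
```

Any workload that serializes native-order integers, mmaps binary files, or casts byte buffers to structs has to be audited before a port like that, which is part of why it makes so little sense.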
Even if it did, widely scalable networks built on that amazing Ethernet bandwidth you just touted make it obsolete and unnecessary, because scale-out architectures tend to offer better performance than strict scale-up architectures unless you have a very special use case that needs massive vertical performance. Oh, wait, the mainframe sucks at that too.