Round and round it goes
Hard to keep up with the Joneses in this race for the biggest AI machine: Nadella's MS Azure Eagles at ~14.4K H100s each, apparently being built at a rate of five per month (≈72K GPUs/month); Zuckerberg's Meta Grand Tetons at roughly 50K H100s across two clusters; Musk's xAI Colossus at 100K H100s, to be upgraded soon-ish to 200K H100/H200s; and now Ellison's Oracle Zettascale AI Supercomputer (OZAIS?) at 131K Blackwells (equiv. 210K H100s), phew! ... But only until that Altman/Nadella 5 GW, million-GPU death-star-gate project emerges ... and the whole cycle starts again!
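For the scoreboard-keepers, here is a quick back-of-envelope tally in Python of the figures quoted above; the per-Eagle count and build cadence are the rough public numbers, and the H100-equivalence factor for Blackwell is just back-solved from the ~210K figure, not an official spec:

```python
# Back-of-envelope tally of the GPU fleets mentioned above.
# All figures are the rough numbers quoted in the text, not official specs.

azure_eagle_gpus = 14_400          # H100s per Eagle-class build
azure_builds_per_month = 5         # reported build cadence
print(f"Azure: ~{azure_eagle_gpus * azure_builds_per_month:,} H100s/month")  # ~72,000

meta_h100s = 50_000                # across two Grand Teton clusters
xai_h100s = 100_000                # Colossus today, slated for 200K H100/H200s
oracle_blackwells = 131_072        # OZAIS

# Implied H100-equivalence per Blackwell, back-solved from the ~210K figure
h100_equiv_per_blackwell = 210_000 / oracle_blackwells
print(f"OZAIS: ~{oracle_blackwells * h100_equiv_per_blackwell:,.0f} H100-equivalents")
```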
It's great business for Nvidia, but "between 5.2 and 5.9 exaFLOPS" of FP64 HPC-oriented oomph (while likely better than some of China's secret supercomputers) is not very much for a machine the size of "OZAIS". AMD's MI300s would, I think, boost that by 3x or 4x, with similar AI performance aside from whatever depends specifically on the convenience and performance of CUDA.
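For what it's worth, the quoted FP64 range falls straight out of per-GPU peak numbers. A minimal sketch, assuming ~40-45 TFLOPS peak FP64 per Blackwell GPU and ~163 TFLOPS peak FP64 matrix per MI300X (my reading of vendor peak figures, not measured results):

```python
# Rough FP64 peak estimate for a 131,072-GPU machine, plus the MI300 comparison.
# Per-GPU TFLOPS figures are assumed vendor peaks, not measured benchmarks.

n_gpus = 131_072

blackwell_fp64_tflops = (40, 45)   # assumed peak FP64 per Blackwell GPU (low, high)
mi300x_fp64_tflops = 163.4         # assumed peak FP64 matrix per MI300X

for tflops in blackwell_fp64_tflops:
    exaflops = n_gpus * tflops / 1e6   # 1 exaFLOPS = 1e6 TFLOPS
    print(f"{tflops} TFLOPS/GPU -> {exaflops:.1f} exaFLOPS")   # 5.2 and 5.9

low = mi300x_fp64_tflops / max(blackwell_fp64_tflops)
high = mi300x_fp64_tflops / min(blackwell_fp64_tflops)
print(f"MI300X boost: ~{low:.1f}x to ~{high:.1f}x")   # roughly the 3x-4x claimed above
```

Run as written, this reproduces the 5.2-5.9 exaFLOPS range and a ~3.6x-4.1x per-GPU FP64 advantage for the MI300X, which is where the "3x or 4x" hand-wave comes from.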