
Stormy Daniels
Nicely illustrated at the top of the Nvidia blog post (under the "unveiled" link, third line of TFA). And it's expected to be economical as well (from the blog): "lifesaving work, which previously cost nearly $3 million on CPUs, can be accomplished using about $60,000 on a single system with an NVIDIA H100 Tensor Core GPU" -- or, as Tobias puts it, "far less costly to run than CPU-based compute clusters" -- if it works reliably, of course, in Taiwan and elsewhere.
The blog links to the StormCast preprint, a 150 MB PDF (lotsa figures in the appendices), so you know what I'm thinking, right? Well, here it is: should generative AI's stable diffusion be expected to accurately predict the unstable, nonlinear phenomena observed in the atmosphere? Wouldn't "unstable diffusion" be preferable there (if it exists)?
In the classical k-ε model, the eddy viscosity (μt ∝ k²/ε) blows up wherever the rate of dissipation of turbulent kinetic energy (ε) happens to vanish, resulting in realistic flowfield mayhem. Might genAI need unstable diffusion to produce a similar walk on the wild side? (Just a thought -- not an expert ...)
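For concreteness, here's a toy sketch (mine, not from the preprint or the blog) of that singularity in Python: the standard k-ε eddy viscosity νt = Cμ·k²/ε, with the usual Cμ = 0.09, grows without bound as ε → 0 while k stays finite.

    C_MU = 0.09  # standard k-epsilon model constant

    def eddy_viscosity(k, epsilon):
        # classical k-epsilon closure: nu_t = C_mu * k**2 / epsilon
        return C_MU * k**2 / epsilon

    k = 1.0  # turbulent kinetic energy, held fixed (illustrative value)
    for eps in (1.0, 1e-2, 1e-4, 1e-8):  # dissipation rate shrinking toward zero
        print(f"epsilon = {eps:8.1e}  ->  nu_t = {eddy_viscosity(k, eps):10.3e}")
    # nu_t grows without bound as epsilon vanishes -- the singularity noted above.

Production solvers fence this off with wall functions or realizability limiters, which is kind of the point: the vanilla model needs a guardrail against its own wild side.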