Eh?
So we're measuring computing capacity in watts now?
Amazon Web Services on Monday announced a plan to build 1.3 gigawatts of compute capacity in new datacenters dedicated to serving the US government, at a cost of up to $50 billion. The cloud colossus says it will break ground on the facilities in 2026, and that its new bit barns will add “AI and supercomputing capacity across …
I read about this recently (it might be a recent trend, or not) and wondered the same thing. What happened to TFLOPS, or polygons (as these are graphics processors at heart)? To me, talking about GW etc. doesn't adequately compare the performance per watt of different chips, nor allow for more power-efficient chips in the future. Anyway, maybe they should talk about tonnes of CO2 emitted to make the point hit home better.
Yes, it's a handy metric, but it doesn't seem a meaningful one.
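Back-of-the-envelope to illustrate the point (the efficiency figures here are made up for the sake of argument, not real chip specs): the same 1.3 GW of facility power translates into wildly different compute depending on FLOPS per watt, which is exactly why watts alone tell you so little.

```python
# Hypothetical sketch: same power budget, different chip efficiency.
# All efficiency numbers below are illustrative assumptions, not real specs.

POWER_W = 1.3e9  # the announced 1.3 GW of capacity

# Assumed efficiencies in FLOPS per watt (made-up values)
chips = {
    "older accelerator": 20e9,    # 20 GFLOPS/W
    "newer accelerator": 100e9,   # 100 GFLOPS/W
}

for name, flops_per_watt in chips.items():
    total_flops = POWER_W * flops_per_watt
    print(f"{name}: {total_flops / 1e18:.1f} exaFLOPS at 1.3 GW")
```

A 5x gap in efficiency is a 5x gap in delivered compute at the same headline gigawatt figure, so "1.3 GW of compute" pins down roughly nothing about actual throughput.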
I guess this follows the recent trend of public-private partnerships for the deployment of large computer systems, potentially disaggregated, heterogeneous, and composable, but with existential challenges too. The upcoming Solstice and Equinox partially fit this notion, imho. Not too sure that's a good deal for the likes of HPE/Cray and Atos (as far as HPC goes).
Hopefully some consideration of lock-in and a properly practicable off-ramp were made before jumping in with both hooves and trotters ...