Re: AI datacenters that require 230 kW of available power per rack.
I'm guessing this is the overall power demand for the DC - including cooling, losses in the UPS, redundant supply loops, etc. - divided out per rack, not the actual per-rack supply. An overall budget per rack, in other words.
The biggest, baddest AI cards at the moment (H200, B200 type stuff) draw roughly 700W to 1kW apiece.
The 6U Dell PowerEdge XE9680 can host 8 such GPUs (e.g. H200). Call it 1kW per GPU and that's roughly 1.3kW per U just for the GPUs, never mind CPU, RAM, storage and networking.
The 2U Dell PowerEdge XE9640 can host 4x 700W GPUs (H100), so that's 1.4kW per U for the GPUs.
Of course, realistically all those ancillaries are unlikely to take the total past 1.5kW/U - the GPUs are the stars of the show.
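For concreteness, here's that density arithmetic as a quick sketch (Python; the per-GPU wattages are just the round numbers assumed above, not measured draw):

    # Rough per-U GPU power density for the two Dell examples above.
    # Wattages are the round numbers assumed in the text, not measured figures.
    servers = {
        "XE9680 (6U, 8 GPUs @ ~1kW each)": (6, 8, 1000),
        "XE9640 (2U, 4 GPUs @ 700W each)": (2, 4, 700),
    }

    for name, (rack_units, gpus, watts_per_gpu) in servers.items():
        kw_per_u = gpus * watts_per_gpu / rack_units / 1000
        print(f"{name}: {kw_per_u:.2f} kW/U of GPU alone")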
But if we take a Joseph Bazalgette "only-doing-this-once" approach and assume we need (or may need in the future) 3kW/U once we include CPU, RAM, losses at the PSU, etc., that still only gets us to 126kW per rack (assuming standard 42U racks, which a hyperscaler doing a custom AI-DC build might not be using...) - just about halfway to 230.

This potentially makes sense if you have two redundant power supplies to the building (whether that's distinct grid connections or grid + local generation), each of which must be capable of keeping the lights on by itself. If you need ~120kW per rack, you budget ~240kW per rack of supply to the building, but you'll never pull that: you'll be pulling half of it, and if one side falls over, the other side can manage alone.
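A minimal sketch of that rack-level arithmetic, using only the assumptions above (3kW/U, 42U racks, 2N supply where either feed must carry the full load on its own):

    # Back-of-envelope rack budget under the generous assumptions above.
    KW_PER_U = 3.0        # assumed headroom per U incl. CPU, RAM, PSU losses
    RACK_UNITS = 42       # standard rack; a custom AI build may differ
    CLAIMED_BUDGET = 230  # kW per rack, the figure being questioned

    rack_load = KW_PER_U * RACK_UNITS  # what a rack would actually draw
    supply_2n = 2 * rack_load          # provisioned if each of two redundant
                                       # feeds must carry the load alone
    print(f"Per-rack load:    {rack_load:.0f} kW")
    print(f"2N supply budget: {supply_2n:.0f} kW (vs. claimed {CLAIMED_BUDGET} kW)")
    print(f"Load vs. claimed budget: {rack_load / CLAIMED_BUDGET:.0%}")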
Also, the guy is in the business of building out new DCs for clients. He doesn't want people converting existing buildings. So his is an informed - but biased - opinion.