Pretty difficult to see how something like AWS Lambda + Aurora Serverless V2 doesn't give you "automated dynamic resource allocation from a very large common pool with charging based on actual usage". Or Azure Cosmos DB Autoscale and Functions. Or Google Kubernetes Engine with Autopilot. Etc. etc.
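To make that concrete, here's roughly what the Aurora Serverless v2 half looks like via boto3 (Python). The cluster name, engine version and capacity numbers are invented for illustration, but ServerlessV2ScalingConfiguration and the db.serverless instance class are the real API: capacity floats between the floor and ceiling you set, and you're billed per ACU-hour actually consumed.

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora Serverless v2 cluster. Capacity floats between
# MinCapacity and MaxCapacity (in ACUs) based on actual load, billed
# per ACU-hour consumed: allocation from the shared pool, charged on usage.
rds.create_db_cluster(
    DBClusterIdentifier="demo-cluster",        # hypothetical name
    Engine="aurora-postgresql",
    EngineVersion="15.4",                      # assumption: any v2-capable version
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,             # RDS generates and stores the password
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,                    # floor: half an ACU when idle
        "MaxCapacity": 16,                     # ceiling under load
    },
)

# Instances in a Serverless v2 cluster use the special "db.serverless"
# class rather than a fixed instance size.
rds.create_db_instance(
    DBInstanceIdentifier="demo-instance-1",    # hypothetical name
    DBClusterIdentifier="demo-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```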
"Government digital service goes titsup on launch" predates cloud technology and is more a comment on some gov IT environments. Selecting the lowest cost supplier, insufficient or unrepresentative nonfunctional testing, extreme political pressure to go live too early etc. etc. are technology-neutral, time tested recipes for failure. A surprising number do actually get it right (or at least right enough), but obviously you tend not to hear about those ones on the news.
That said: if load spikes fast enough, all the pooled cloud capacity and fancy autoscaling in the world won't help you. Autoscaling is fast but not instant, and if you're not prepared to tolerate some user-visible errors while it catches up, the only answer is a bit of overprovisioning.
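In serverless land, "a bit of overprovisioning" means paying for pre-warmed capacity. A sketch with boto3, assuming a hypothetical function name and alias (put_provisioned_concurrency_config is the real Lambda call):

```python
import boto3

lam = boto3.client("lambda")

# Keep 100 execution environments warm on the live alias so a sudden
# spike is absorbed immediately; requests beyond that still autoscale,
# but with cold-start latency while new environments spin up.
lam.put_provisioned_concurrency_config(
    FunctionName="checkout-api",             # hypothetical function name
    Qualifier="live",                        # hypothetical alias
    ProvisionedConcurrentExecutions=100,     # the overprovisioned headroom
)
```

You pay for those warm environments whether or not they're used, which is exactly the trade-off: money for headroom versus user-visible errors during the scale-up window.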
But scaling is much faster in the cloud than in (at least some) on-prem environments. I have worked with more than one enterprise with capacity management so poor that rollouts were delayed while a new datacentre hall was completed, or kit worked its way through the supply chain, or a hunt was carried out for VMs that could be killed and dedicated servers that could be pulled. In the cloud, at least, the wait is generally measured in minutes, not months.
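For contrast, adding capacity in the cloud is one API call plus instance boot time. Another boto3 sketch; the group name and numbers are invented, but set_desired_capacity is the real Auto Scaling call:

```python
import boto3

asg = boto3.client("autoscaling")

# Request 30 more instances: fulfilment is a few minutes of boot time
# from the shared pool, not a purchase order, a supply chain, and a
# wait for racking and cabling.
asg.set_desired_capacity(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    DesiredCapacity=50,               # up from, say, 20
    HonorCooldown=False,              # apply now, don't wait out the cooldown timer
)
```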