Re: The problem isn't the Cloud, but poor monitoring
Sorry, I think the BS is yours.
There are specialist third-party cloud spend monitoring and optimisation tools - third-party because it would make no sense for the clouds to provide them themselves, as no-one would trust them. Some of them are expensive and really only make sense for very large cloud estates. But you can do a great deal with the standard, built-in, essentially free ones.
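To give a flavour of what the built-in stuff can do, here's a rough sketch that pulls the last month's spend per service out of AWS's Cost Explorer API via boto3 - nothing beyond the standard tooling, though the grouping and metric choices are purely illustrative (and note the Cost Explorer API charges a small per-request fee, hence "essentially" free):

    import boto3
    from datetime import date, timedelta

    # Last 30 days of spend, grouped by service, straight from the
    # built-in Cost Explorer API - no third-party tooling involved.
    ce = boto3.client("ce")
    end = date.today()
    start = end - timedelta(days=30)

    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    for result in resp["ResultsByTime"]:
        for group in result["Groups"]:
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            if amount > 0:
                unit = group["Metrics"]["UnblendedCost"]["Unit"]
                print(f'{group["Keys"][0]}: {amount:,.2f} {unit}')

Put a budget alert on top of that and you've covered what a surprising number of shops pay third parties for.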
On reversing out of the cloud: if you hold truly epic quantities of data, that does create some lock-in, but not an irreversible one. Case in point: Dropbox pulled hundreds of petabytes of data out of AWS S3 when they decided they had the scale and capability to build and run their own private storage cloud.
More importantly, and just as with any proprietary vendor, on-prem ones included, there is substantial lock-in if you go for proprietary high-level services as opposed to lower-level, standards-based ones. There are things you can do to mitigate that a bit (Kubernetes is often one aspect of this; a thin abstraction over the vendor's APIs, sketched below, is another), but these tend to increase complexity and unpick a number of the benefits of going to the cloud in the first place. Essentially, you end up trading the potential long-term costs of lock-in against increased short-term build costs. It's not a new problem, nor is it cloud-specific. The right answer usually depends on how fast you have to get to market.
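To make the trade concrete, the mitigation usually looks something like this: a thin storage interface your application codes against, with the vendor SDK hidden behind it. Everything here (class names, the local-filesystem alternative) is hypothetical and deliberately minimal - it illustrates the extra code you have to write and maintain, it isn't a recommendation:

    import abc
    import pathlib

    class BlobStore(abc.ABC):
        """Storage interface the application uses, so no code outside
        this module ever touches a vendor SDK directly."""

        @abc.abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abc.abstractmethod
        def get(self, key: str) -> bytes: ...

    class S3Store(BlobStore):
        """Cloud implementation (assumes boto3)."""
        def __init__(self, bucket: str):
            import boto3
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

    class LocalStore(BlobStore):
        """On-prem / exit-route implementation backed by a plain filesystem."""
        def __init__(self, root: str):
            self._root = pathlib.Path(root)
            self._root.mkdir(parents=True, exist_ok=True)

        def put(self, key: str, data: bytes) -> None:
            path = self._root / key
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

        def get(self, key: str) -> bytes:
            return (self._root / key).read_bytes()

Every proprietary feature you then refuse to use because it doesn't fit the interface - event notifications, object lock, lifecycle rules - is one of the cloud benefits you've just unpicked, which is rather the point.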
I've spent a fair amount of time looking at DR for both on-prem and cloud-based services across a good number of companies, and from a practical perspective DR for cloud-based services tends to be way ahead in my experience, because the clouds make it really easy to implement setups that provide (largely) automated resilience to outages affecting a smallish geographical area (e.g. tens of miles). On-prem DR is often a shitshow on very many levels. And the clouds do effectively provide tape or equivalents - S3 Glacier is reputedly backed by it, at least the last time I checked. They won't, of course, be tapes in your possession, which I suspect is what you're really fulminating about.
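For what it's worth, using that tape-equivalent tier is a one-off lifecycle rule rather than a tape-library project. A sketch, again assuming boto3; the bucket name, prefix and retention periods are invented:

    import boto3

    s3 = boto3.client("s3")

    # Move objects under backups/ to Glacier after 30 days and to
    # Deep Archive after a year - the cloud's version of the tape vault.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-backups",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "GLACIER"},
                        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                    ],
                }
            ]
        },
    )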
The one type of disaster that many people building on cloud do not address is wholesale failure of the cloud provider, for either technical or business-viability reasons. You have to take a view on how likely that is - the really big cloud providers seem to isolate availability zones pretty well nowadays (much better than most enterprises - one I reviewed had forgotten to implement DR for their DHCP servers, FFS, and it took a power outage for them to notice). The top three providers are probably too big to fail, and if they got into trouble as businesses, the government would probably backstop them, not least because they don't want their own stuff falling over. But if you do want to mitigate, just sync your data back to base. There are lots of patterns for doing so - the crudest one is sketched below.
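By way of example, the crudest 'sync back to base' pattern, assuming boto3 and a box you control - bucket and destination path are made up, and in real life you'd more likely schedule aws s3 sync or replicate to a second provider:

    import pathlib
    import boto3

    def sync_bucket_to_base(bucket: str, dest: str) -> None:
        """Pull every object in the bucket down to storage you control."""
        s3 = boto3.client("s3")
        root = pathlib.Path(dest)
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                target = root / obj["Key"]
                target.parent.mkdir(parents=True, exist_ok=True)
                s3.download_file(bucket, obj["Key"], str(target))

    # Run it nightly from on-prem and the "provider goes under" scenario
    # becomes an inconvenience rather than an extinction event.
    # sync_bucket_to_base("example-critical-data", "/srv/cloud-mirror")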