Lazy terminology -> bad results
Calling an AWS shop that uses GCP for analytics "multi-cloud" is nonsense. I mean, if you want to make the board happy, I guess. To mean anything, multi-cloud has to mean an application serving production load in multiple cloud providers at once. If you're even trying to be serious, that means DNS resolving to load balancers in multiple clouds, with the LBs in each cloud capable of routing all traffic to backends in any cloud.
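One observable consequence: the DNS answer for the app should contain load-balancer addresses from more than one provider. A minimal sketch of that check, with hypothetical hostnames and IPs standing in for your real records:

```python
import socket

# Hypothetical LB addresses per cloud; real values would come from
# your own infrastructure records, not from this sketch.
LB_IPS = {
    "aws": {"203.0.113.10", "203.0.113.11"},
    "gcp": {"198.51.100.20", "198.51.100.21"},
}

def clouds_in_dns_answer(hostname: str) -> dict[str, set[str]]:
    """Resolve hostname and bucket the answered IPs by cloud."""
    answers = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
    return {cloud: answers & ips for cloud, ips in LB_IPS.items()}

# If one cloud's bucket is always empty, you are single-cloud with
# extra steps, whatever the slide deck says.
print(clouds_in_dns_answer("app.example.com"))  # hypothetical hostname
```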
Actually doing THAT is not likely to make sense for any but a very narrow slice of businesses. As mentioned, the complexity of getting it right is a substantial cost in its own right. Getting any actual improvement in resiliency means understanding a LOT about the physical locations of the servers (information which, for some reason, cloud providers are reluctant to provide) and ensuring that barn X for CP A is sufficiently isolated from barn Y for CP B.
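The pitch for multi-cloud is independence math: two clouds that each fail 0.1% of the time should, in parallel, fail together only 0.0001% of the time. That only holds if the failures really are independent, which is exactly what the barn-isolation question decides. A toy model of how fast a correlated failure mode eats the benefit:

```python
def combined_availability(a: float, b: float, correlated: float = 0.0) -> float:
    """Toy model: availability of an app running in two clouds at once.

    `correlated` is the extra probability that both clouds are down
    together (shared fiber path, shared BGP incident, same flooded
    barn). Zero means truly independent failures.
    """
    p_both_down = (1 - a) * (1 - b) + correlated
    return 1 - p_both_down

# Two independent 99.9% clouds look like six nines on paper:
print(f"{combined_availability(0.999, 0.999):.4%}")          # 99.9999%
# Add a 0.05% shared failure mode and you land BELOW a single
# well-run 99.99% provider:
print(f"{combined_availability(0.999, 0.999, 0.0005):.4%}")  # ~99.95%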
FAR better to get a deep understanding of your provider's reliability capabilities, and set up resilience using their tools. Your engineers are certainly NOT better at this than theirs, and unlike yours, theirs have both the data needed to do it right and a job description that consists of exactly that.
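One concrete example of "their tools": availability zones are failure domains the provider has already isolated and documented for you. A sketch of enumerating them with boto3 (assuming AWS credentials are configured; the region choice is arbitrary):

```python
import boto3  # third-party; assumes AWS credentials are already set up

# Listing the isolated failure domains a provider hands you is step
# zero of single-cloud resilience, before any multi-cloud heroics.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)
for az in response["AvailabilityZones"]:
    # ZoneId is stable across accounts; ZoneName is shuffled per account.
    print(az["ZoneName"], az["ZoneId"])
```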
Unless you are on Azure. Then you can look forward to the occasional multi-hour all-systems outage.
But for AWS & GCP, if your business actually needs more than four nines, you can read up on exactly what it takes to get there. I would think 99.997% uptime is quite doable in a single cloud on either. And if you actually need five nines, it's probably worth looking into building your own infrastructure before going multi-cloud.
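For calibration, here is the yearly downtime budget each of those figures buys (plain arithmetic, nothing provider-specific):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

# Downtime budget per year at each availability target. Note that a
# single three-hour outage spends ~34 years of five-nines budget.
for availability in (0.9999, 0.99997, 0.99999):
    budget = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> {budget:5.1f} minutes/year down")
```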