It's complicated.
Depending on where you are in the cycle, from pre-seed to mature company, costs vary widely.
I primarily work for companies deploying on AWS. I'm a C++ programmer by background, but these days it's mostly near-real-time messaging applications; large gambling tech companies are the poster children.
There is a lot of on-prem for regulatory reasons: operations must be physically located in the country.
As a pre-seed startup, you can get oodles of cloud credits to try your stupid idea for "free", infinitely renewable just by picking a new shit company name and paying Companies House 85 quid for a limited company.
My hosting on real metal involves real British Beer Tokens being transferred to my provider every month; they would utter a hollow laugh at the suggestion that I might run on their tin for free.
The horrible truth is that on-prem needs better staff, and if you have them, you'll be fine either on-prem or in the cloud.
If you have a stable and sane architecture, you'll run more cost-effectively on-prem. But if you want to bill each business unit for the compute/services it consumes, things start getting more complicated.
I run most of my personal infra on dedicated hardware because my needs are stable, so it's vastly cheaper than AWS, assuming you value my exceedingly expensive time at zero.
In my semi-defence, I've written most of this and it's stable, so it requires little actual day-to-day effort; it's mostly automatic. But I spent a shitload of time setting it up and learning. I still think I'm quids in, but if I had to pay myself for my time collectively...
But at commercial scale, AWS costs are highly variable: credits, agreements, Reserved pricing, etc. Karpenter is effectively dynamic auction pricing for infra.
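As one illustration of why the sticker price is slippery, here's a minimal sketch, assuming boto3 and configured AWS credentials, that pulls recent Spot price history. Spot is the auction market Karpenter leans on for cheap capacity, and the numbers move constantly by instance type and AZ; the instance type and region here are just illustrative choices.

```python
# Minimal sketch: show how much the Spot "price" for one instance type has
# moved over the last day. Assumes boto3 is installed and AWS credentials
# are configured; the instance type and region are illustrative.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["m5.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    MaxResults=100,
)

# Each entry is a (timestamp, AZ, price) sample; the same instance type
# can cost noticeably different amounts an hour apart, or one AZ over.
for entry in resp["SpotPriceHistory"]:
    print(entry["Timestamp"], entry["AvailabilityZone"], entry["SpotPrice"])
```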
So it's quite difficult to work out how much you actually pay, which is why I prefer to pay for OS labour costs. There are solutions you can stand up fast and securely, where the cost is a rounding error in the profit margin, even at treble the price.
Let me give you an example. I needed to migrate something inside six hours, a one-off job. That six-hour window cost more than the salaries of everyone working on the problem for the next couple of years.
AWS had a "network incident", in response to which I gleefully posted https://blogs.oracle.com/developers/post/fallacies-of-distributed-systems.
This momentary blip caused a 5-hour transfer to fail (of course it had gone smoothly in rehearsals). Approximately a week later, that transfer took two hours. Because of Linux engineering (slice and parallelise, with async exponential backoff; see the sketch below), the bottleneck could now be purely IO. Because of the cloud-native position, it was possible to have a faster network path and an over-provisioned storage path, at excessive cost, for the one-off task: stood up, used, torn down, without beancounter approval.
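For the curious, this is roughly the shape of "slice and parallelise, with async exponential backoff", as a minimal Python sketch. The transfer_chunk() coroutine, chunk size, and retry limits are hypothetical stand-ins, not the actual tooling used.

```python
# Minimal sketch: slice a large transfer into byte ranges, run them in
# parallel up to a concurrency cap, and retry each chunk with exponential
# backoff so a transient "network incident" doesn't kill the whole job.
import asyncio
import random

CHUNK_SIZE = 256 * 1024 * 1024   # slice the source into 256 MiB ranges (illustrative)
MAX_CONCURRENCY = 32             # enough parallelism to make IO the bottleneck
MAX_RETRIES = 8

async def transfer_chunk(offset: int, length: int) -> None:
    """Hypothetical: copy one byte range from source to destination."""
    ...

async def transfer_with_backoff(offset: int, length: int) -> None:
    for attempt in range(MAX_RETRIES):
        try:
            await transfer_chunk(offset, length)
            return
        except OSError:
            # async exponential backoff with jitter: a blip delays this
            # chunk for a while instead of failing the multi-hour transfer
            await asyncio.sleep((2 ** attempt) + random.random())
    raise RuntimeError(f"chunk at offset {offset} failed after {MAX_RETRIES} attempts")

async def migrate(total_size: int) -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENCY)

    async def guarded(offset: int, length: int) -> None:
        async with sem:
            await transfer_with_backoff(offset, length)

    tasks = [
        guarded(off, min(CHUNK_SIZE, total_size - off))
        for off in range(0, total_size, CHUNK_SIZE)
    ]
    await asyncio.gather(*tasks)
```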
Without being able to make that parallel, being on cloud doesn't help; and without being able to deploy better network links right now, there is an IO limit that gates my performance envelope.
The AWS point-to-point network bandwidth is pretty impressive. I've worked with a lot of good on-prem people; they are getting older, and we're not replacing them.
That's a mistake in my view, but so is judging cloud costs by the sticker price.