Alex Savage doesn't seem to have a clue what he's talking about.
Unless you're a tier-one enterprise that can piss money and pride up the wall developing your own LLMs, the only game in town is the LLMs from the hyperscalers, for which re-training and fine-tuning are very limited options anyway. With a bit of prompt engineering they're pretty good, certainly for the general stuff you're likely to throw at them via Snowflake.
The actual cost of running the hyperscalers' LLMs is buttons too; only the artificial price points set by the hyperscalers are a consideration. Anything built on GPT-3.5 Turbo is peanuts; GPT-4 is still expensive, but expect that to come down as the competition builds.
(There are some interesting low-resource LLMs emerging in places like Hugging Face, but consumer-grade LLMs from the hyperscalers are mostly where it's going to be at, unless you've got a big Data Science team struggling to stay relevant.)