The AI bubble just got a lot bigger
And like a giant bubblegum bubble, it makes a hell of a mess when it bursts.
"Cleanup on Aisle 6"
The headlines say OpenAI on Friday announced $110 billion in new investment from Amazon, Nvidia, and SoftBank at a $730 billion pre-money valuation, though terms and conditions apply. Both Amazon's $50 billion and Nvidia's $30 billion investments are tied to massive customer commitments by OpenAI and its partners. Of Amazon's …
Given how they're all tying themselves together in these stupid circular deals with little new or real money behind them, it's not going to take much of a problem at one company to screw a whole stack of players.
And best of all, so many of them have maneuvered themselves into a position where the AI bubble is the vast majority of their business.
Both Amazon's and Nvidia's investments are structured in such a way as to guarantee a return on every dollar invested in OpenAI. The funding is essentially a discount on compute infrastructure that doesn't dilute their revenues while driving up OpenAI's valuation.
However, both are spending massive amounts of money on datacenters and compute (Trainiums, Vera Rubins) - isn't that true? I guess taxes are paid on revenue but not on investments.
Reminds me of a now politically incorrect story from my childhood.
"The tigers are vain and each thinks that it is better dressed than the others. They have a big argument and chase each other around a tree until they are reduced to a pool of ghee (clarified butter)."
These would-be AI tigers will be reduced to oceans of insolvency rather than pools of ghee.
Agentic AI has basically killed cloud AI. I don’t want a model which knows things. I want a model that can figure things out.
Agentic AI only needs enough knowledge to be able to research more. We don't need 120-billion-parameter models. We need models trained on how to find information using tools.
These models will get smaller and smaller as we optimize them further. I also think that training will be much faster and require much less powerful hardware.
Step 1) Train a model to learn things by searching the internet.
Step 2) Train the model to learn things using vision.
Step 3) Give the model access to tools. These tools can be compilers, computers, or a robot with motion and multiple senses.
Step 4) Teach the model to ask questions.
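The steps above amount to a model wrapped in a tool-use loop. Here is a toy sketch of that loop, where the "model" and the search tool are both stubs invented for illustration (no real model, API, or tool names are implied):

```python
# Toy agent loop: a stub "model" decides whether to call a tool or answer,
# and the only tool is a hypothetical stand-in for internet search.

def search_tool(query: str) -> str:
    """Hypothetical tool: stands in for a real web search."""
    knowledge = {"capital of France": "Paris"}
    return knowledge.get(query, "no results")

def stub_model(question: str, observations: list[str]) -> dict:
    """Stub 'small model': it knows almost nothing itself, so it asks the
    tool first, then answers from whatever the tool returned."""
    if not observations:
        return {"action": "tool", "tool_input": question}
    return {"action": "answer", "text": observations[-1]}

def agent_loop(question: str, max_steps: int = 4) -> str:
    """Run model -> tool -> model until the model produces an answer."""
    observations: list[str] = []
    for _ in range(max_steps):
        step = stub_model(question, observations)
        if step["action"] == "answer":
            return step["text"]
        observations.append(search_tool(step["tool_input"]))
    return "gave up"

print(agent_loop("capital of France"))  # -> Paris
```

The point of the sketch is the shape, not the stubs: the model's parameters only need to cover "when to call which tool", while the facts live outside the weights.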
I expect these models to be small enough to run on current-generation mobile phones, or maybe early next generation. I know Apple has put quite a bit of AI compute into their phones. I am not sure whether Google has standardized APIs to support on-device AI.
But what I am absolutely certain of is that OpenAI's business model is bigger, bigger, bigger. Local AI is still a bit weak. We're approximately where ChatGPT was last year on small self-hosted models. But I expect Microsoft to release a substantially more powerful Phi soon. I am hoping it is an agentic masterpiece. If nothing else, I hope Microsoft and Google open their APIs to allow third parties to build their AIs into Microsoft and Google's agents.
I actually think Microsoft should have a legal responsibility to open their APIs to third-party AI models. I would love to use a locally hosted Phi or DeepSeek. The ultimate design I can think of is Copilot with the options "click this to choose your local model" and "click this to choose your cloud model". Then I'd be able to ask for recipes and stuff from my local model, and I'd be able to pay for cloud capacity for problems too big or too slow for a local model.
I really just don’t see how OpenAI fits into the future. They just haven’t made any innovations this past year and I feel like they don’t even realize the world has moved on.
If you want to see what I mean, buy a few tokens on a Kimi K2.5 provider. Then test it without agents and then try it with agents. Without agents it’s insanely powerful. It’s like ChatGPT. It’s a completely open source 1 trillion parameter model. And if it knows stuff, it’s amazing. If it doesn’t, it’s stupid and makes stuff up. When you enable agents, it is able to answer things it doesn’t know the answer to.
Now try the same using a locally hosted model like Mixtral. It's garbage without agents. With agents, it's better than ChatGPT 4o. And this is something you could actually run on your phone… without the cloud.
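To make the with/without-agents comparison concrete, here is a sketch of the two request bodies you would send to an OpenAI-compatible chat endpoint, which many local servers (Ollama, llama.cpp, and others) also expose. The model name and the tool schema here are assumptions for illustration, not any provider's actual values:

```python
import json

# Two OpenAI-compatible chat payloads for the same question: one bare,
# one with a tool the server could dispatch. Model name and tool are
# hypothetical placeholders.
BASE = {
    "model": "local-model",  # assumption: whatever your local server serves
    "messages": [
        {"role": "user", "content": "What changed in this library's latest release?"}
    ],
}

# Without agents: the model must answer from its weights alone.
plain = dict(BASE)

# With agents: the same request plus a declared tool (hypothetical schema).
with_tools = dict(BASE, tools=[{
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool name
        "description": "Search the web and return snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}])

print(json.dumps(with_tools, indent=2))
```

The only difference between the two runs is the `tools` array; the "it makes stuff up" versus "it goes and finds out" behavior falls out of that one field plus the client loop that executes tool calls.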
What happens to all the cloud companies when you're as good as ChatGPT 5.3 on your phone or laptop without a cloud provider?
This isn't the distant future. I expect this in 3-6 months. The race is on, and if you try Claude Code with Mixtral, you can tell that we're very, very close. In fact, I think it would be so cool if Anthropic tried making a local model where I could pay them $10 a month to always have their latest locally hosted model. I would pay for that. Outside of testing, I will never pay for tokens for a cloud service.
Usually when you order wafers you pay upfront. Then in 2-3 months they leave the fab and get assembled.
For all of these announcements, has money (cash; I'm an old-fashioned guy) changed hands?
Have the memory/disk companies got the money? Are the plants at 100%? Are the goods going to cloud companies?
Can data centre people get more/replacement parts easily?
Show me the money.