In other words...
...its annual losses, based on this, are roughly the GDP of Bahrain... and a bit less than that of Tunisia.
Microsoft reported earnings for the quarter ended Sept. 30 on Wednesday after market close and buried in its financial filings were a couple of passages suggesting that OpenAI suffered a net loss of $11.5 billion or more during the quarter. Let's look first at page 9 of the official earnings filing with the US Securities and …
This is an important question. Shame on El Reg for failing to address it.
I did a search for 'banknote dimensions' and got a thickness for a US dollar bill of about 0.1 mm. I'm lazy, and will assume as a Fermi estimate that a Yank dollar's thickness is approximately that of a UK banknote. A £5 note is 125x65 mm. So a banknote has a volume of about 0.8 cm^3.
Taking the average of the size ranges for a double-decker bus from this site, we're looking at 12x2.4x4 m. Assume only about a quarter of the bus is available for filling with banknotes, and you get a volume of about 29 m^3, or about 29e+6 cm^3. So, you can fit about 36 million banknotes in a double-decker bus, or £180 million in fivers.
Right now, £1 = $1.31, so that $11.5 billion loss comes to roughly £8.8B... call it 48 double-decker busloads of fivers.
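For anyone who wants to fiddle with the guesses, the whole estimate fits in a few lines of Python. Every input below is just a rough figure from the comment above (or my rounding of it), not an official number:

```python
# Fermi estimate: an $11.5B quarterly loss as double-decker busloads of fivers.
# All inputs are the rough guesses from the comment above, not official figures.

note_l_mm, note_w_mm, note_t_mm = 125, 65, 0.1            # £5 note, assumed ~0.1 mm thick
note_volume_cm3 = note_l_mm * note_w_mm * note_t_mm / 1000  # ~0.8 cm^3 per note

bus_volume_m3 = 12 * 2.4 * 4     # guessed overall bus dimensions
usable_m3 = bus_volume_m3 / 4    # assume ~a quarter is usable for stacking notes

notes_per_bus = usable_m3 * 1e6 / note_volume_cm3   # ~36 million fivers
gbp_per_bus = notes_per_bus * 5                     # ~£180 million per bus

loss_gbp = 11.5e9 / 1.31         # assumed exchange rate: £1 = $1.31
buses = loss_gbp / gbp_per_bus   # ~48-50, depending on rounding

print(f"{notes_per_bus/1e6:.0f}M fivers/bus, £{gbp_per_bus/1e6:.0f}M/bus, {buses:.0f} buses")
```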
...and then they're all so full of cash, you can't even get on them anyway.
(In fairness to El Reg, I should note that the double-decker bus is a unit of length, not volume. Using it as the latter could result in confusion, of the sort that happens with an "ounce" being a unit of either weight or volume.
Further to that, I see at the above link that the Standards Bureau has pegged the length of such an object at 9.2m, not 12m. So my result of 48 may have to be suitably increased.)
"I'm lazy, and will assume as a Fermi estimate that a Yank dollar's thickness is approximately that of a UK banknote"
UK banknotes are approx 1/11th of a millimetre thick. A stack of £20 notes is £220/mm, or £220 million/km, which last time I looked was less than the cost of building HS2.
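Or, as a quick check in Python (using the ~1/11 mm thickness quoted above; the HS2 comparison is left to the reader):

```python
# Value per kilometre of height for a stack of £20 notes, at ~1/11 mm per note.
notes_per_mm = 11                      # ~1/11 mm thick each
gbp_per_mm = notes_per_mm * 20         # £220 per mm of stack
gbp_per_km = gbp_per_mm * 1_000_000    # £220 million per km
print(f"£{gbp_per_km:,.0f} per km of stacked £20 notes")
```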
Between 26 and 29 double-decker buses stuffed with fivers!
Some sources say 25,000 dollar bills per cubic foot...
A double-decker bus holds about 100 cubic meters.
That's about 88 million bills per bus.
So, at $5 a bill, approx $441m per double-decker bus.
Giving about 26 buses.
Other sources say a billion $1 bills has a volume of 1,250 cubic meters. So, assuming $5 bills again, that would need approx 29 double-decker buses.
The problem is that those buses would collapse under the weight: 88 million bills at roughly a gram each is about 88 tons of cash, more than 14x their maximum load...
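The same sums, sketched in Python. The bills-per-cubic-foot and bus-volume figures are the ones quoted above; the ~1 g per bill weight and ~6-tonne payload limit are rough assumptions implied by the "88 tons" and "14x" figures:

```python
# Fermi estimate: the $11.5B quarterly loss in $5 bills, by bus volume and by weight.
bills_per_cubic_foot = 25_000
bus_volume_m3 = 100
cubic_feet_per_m3 = 35.3

bills_per_bus = bus_volume_m3 * cubic_feet_per_m3 * bills_per_cubic_foot  # ~88 million
usd_per_bus = bills_per_bus * 5                                           # ~$441M
buses_needed = 11.5e9 / usd_per_bus                                       # ~26

bill_weight_g = 1.0                           # a US bill weighs roughly a gram (assumption)
load_tonnes = bills_per_bus * bill_weight_g / 1e6   # ~88 tonnes of cash per bus
max_payload_tonnes = 6                        # rough payload limit implied by the "14x" above
overload = load_tonnes / max_payload_tonnes   # ~14-15x over

print(f"{bills_per_bus/1e6:.0f}M bills/bus, ${usd_per_bus/1e6:.0f}M/bus, "
      f"{buses_needed:.0f} buses, {overload:.0f}x over the payload limit")
```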
That is the single good thing about shareholders: at some point or another, you have to come clean about where the money is going.
Even the FBI can't get that information without a warrant, and PR is completely unreliable in the best of circumstances.
But file a quarterly report that goes to shareholders and you're screwed. There is no wiggle room. The numbers are the numbers and, at that level, shareholders know how to read them.
So far they've been placated by M$'s multiple sackings of their workforce, but given there's a finite number of employees, that'll come to an end eventually. I guess at that point the execs will have to sack themselves and make ClippyPilot the CEO.
"The shareholders may beg to differ. Rich people are rarely happy to see their money pissed away into a money pit."
US and especially tech sector share prices have been divorced from financial reality for many years now, so shareholders get wealthier not through dividends paid out of profits, but through share price appreciation. Because that is the case, MS shareholders get wealthier by MS throwing money at tulips.
Hey Microsoft, I'll happily piss away 1% of what OpenAI are pissing away, and I'll do it whilst creating nothing that people want or need (just like OpenAI), but I'll be 100% less polluting! Think of the headlines! Microsoft goes green! 100% less emissions for the same lack of profit! It'll be huge!
Call me and we'll make this happen!
The problem for OpenAI is that plenty of people want ChatGPT, based on its usage metrics, but very few people feel like they need it enough to justify paying the full cost of the service. Hence the losses just keep on coming.
I've said before that I don't find this stuff completely useless - there's been places where we've seen genuine productivity gains at work from rolling out both Copilot and other AI tools - but the problem for big tech is that they're still subsidising these services to make their use economical, and if they stop, they'll likely find most customers unwilling or unable to pay for them at full cost, let alone at a price point that returns a decent profit margin. Even where we're finding it useful, we're being careful not to make it indispensable, precisely because we're well aware that there could be a rug-pull at some point, and in that situation, we need other options. Any business that's making itself totally dependent on these services is likely in for a rude awakening when the bubble bursts.
You don't have to be a very large part of the operating expenses to make out like a bandit with these sorts of numbers flying around.
A further anecdotal metric: my unsolicited LinkedIn contacts from AI firms seem overwhelmingly to have pivoted from trying to hire me to trying to sell me services. Which has not affected my response rate in the slightest.
They already spent (who knows how much: a really huge amount of money) on “training” the A.I. model. That cake is baked. The model already works.
So what is the new money buying? Oh ok, “more training,” but on what? And to what end? The LLM is already pretty good at answering pretty much anything asked of it, you know, so long as it’s only asked for “interpolative results.”
I’m just curious what they’re expecting the new LLMs to be capable of that they’re not already capable of. Because if they’re just churning the same fundamental technology, it really won’t have any greater capability to speak of. Also, the future LLM will still be just as liable to “make up” fake answers and to gaslight as it already is.
(Starlink, yeah, continuous investment makes sense. Their satellites are falling out of the sky every day now, so they must maintain the launch schedule before their “constellation” vanishes completely!)
They are betting that they can sleepwalk everyone into disclosing all their personal data and what they do in real time. That's what they will need the compute capacity to "model", aka monetize. But I don't believe it will ever compensate for the amount of cash burned on the AI altar.
Exactly! We're approaching something. If you redefine "the singularity" from its previous apocalypse-adjacent meaning[*] to mean "the absolute best result an LLM can ever hope to achieve," then we're basically approaching it, and always will be, because it's an asymptote. We can put exponentially more processing power into an LLM, but the training data is finite.
"AI" of today asymptotically approaches being a perfect search engine that makes stuff up if the answer was never posted to reddit. And it has no idea whether the answer is real or made up, so it can't even warn you.
The "apocalypse" analogy runs deep. The singularity will always be just around the corner.
* singularity: the point in time when machine intelligence reaches the level where human intelligence is no longer relevant and change happens so fast that the speed of development is effectively infinite, such that the rate of technological progress is nonlinear in the semiconductor sense, similar to reaching the event horizon of a black hole
A loss of $3.1B out of a gross profit of $30.8B is 10%, and that is not "easily absorbed" even by Microsoft. If you had told investors mid-quarter "hey, our profits are going to be 10% less than they would have been without OpenAI", do you think they would have said "this is fine"?
These are seriously big numbers, and they are getting bigger, not smaller.
We are forced to use Copilot at work. I know the suggestions from Copilot are flawed, but I use them anyway. Then issues are discovered and my boss needs me to fix the problems. I simply junk the AI gobbledygook and use the solution that I would have used from the get-go without Copilot. But this way, to my manager I appear to be indispensable. Thank you, Copilot.
I'm intrigued and faintly appalled by this statement:
> We are forced to use Copilot at work
In what way, and how is that enforced? I mean, do you get into trouble for writing your own report | recommendation | memo | Post-It Note? I can think of so many dystopian scenarios that I'm not going to query each one explicitly, but I would be interested to know some more detail, either from Mk10 or anyone else being force-fed on AI output.
I'm a different commenter but can speak to this a little.
At work we're not officially forced to use AI.
We're very strongly encouraged to use it, but have been told that we're free not to so long as our performance "doesn't suffer" from not using it.
Essentially, they've re-pegged what they think the level of output should be based on a finger in the air and set performance expectations accordingly. People who don't use AI probably won't achieve that level (at least not without burning themselves out) and so won't get their performance bonus.
I'm a manager though, so I've set individual goals to something more realistic to make sure the staff don't come a cropper because of leadership's unrealistic expectations.
After upper manglement determined we should be making use of AI and mandated its rollout, some of our departments held competitions with prizes for the best use of Copilot. None of the uses were actually, umm, useful. It really is a poor solution looking for a simple problem.
Once again, this proves the age-old adage: garbage in, garbage out.
Electronically summarising everything that has been written or spoken is not the same as having read, considered, filtered and summarised the information.
Humans can be good at that, but the "use AI, accept the output" approach means that the next generation will be worse at it, as they won't have been practising critical thinking.
Mike Amundsen: “In the winter of 1636, a single tulip bulb sold for more than a canal house in Amsterdam. Bidding wars broke out in taverns.”
“Promissory notes for rare bulbs were traded like startup term sheets, with about the same level of scrutiny. One bulb, the Semper Augustus, reportedly fetched 5,500 guilders, close to two hundred thousand US dollars in today's money”
The First-Mover Advantage in AI: Opportunities and Challenges
“The first-mover advantage refers to the competitive edge gained by the first company to enter a new market or adopt a novel technology. In many cases, being first can confer significant benefits, such as establishing market leadership, gaining brand recognition, and creating barriers to entry for competitors. Companies that are early to market with a new technology or business model can set industry standards, influence customer expectations, and build a loyal customer base.”
"Companies that are early to market with a new technology or business model can set industry standards, influence customer expectations, and build a loyal customer base."
All this is predicated on having a 'product' that is in demand because it 'does' something the customer wants.
'AI' 'does' ONLY if you are asking the 'right' sort of question, have the 'right' sort of data trained into the 'AI' and are 'holding it in the right way' etc etc.
The ground the 'Early Starter' company is standing on is moving all the time as 'New' hacks and tricks make 'AI' better and/or worse according to which way the wind is blowing this nano-second.
'AI' is in a permanent 'quantum superposition' like state ... it works as far as the vendors are concerned and doesn't work as far as anyone else ... at the same time !!!
The losses are so far off the scale that nobody can make sense of the drive to keep spending at this rate.
Win or lose the money is going to be clawed back from somewhere and we know who is ultimately going to be the one to pay !!!
The Tech behemoths behind the 'AI' craze are too big to fail, the whole world runs on their global computers that support the global economy.
Our governments are just puppets, they can make threats and demand fines/penalties BUT any Tech Behemoth could 'pull a plug' for a few weeks and bring down a government ... by accident of course !!!
:)
https://thefinancialbrand.com/news/artificial-intelligence-banking/why-95-of-enterprises-are-getting-zero-return-on-ai-investment-191950
https://www.businessreport.com/article/has-investment-spending-on-ai-gone-too-far
https://www.firstlinks.com.au/simple-maths-says-the-ai-investment-boom-is-doomed
https://www.reuters.com/commentary/breakingviews/ai-investment-bubble-inflated-by-trio-dilemmas-2025-09-25/