They seem to have quite a few assembly problems
I found Anastasi's comments on this (as usual) quite illuminating.
I may not have her chip engineering background, but I do know materials and it does indeed seem to add up to a bit of a challenge.
Nvidia CEO Jensen Huang has attempted to quell concerns over the reported late arrival of the Blackwell GPU architecture, and the lack of ROI from AI investments. "Demand is so great that delivery of our components and our technology and our infrastructure and software is really emotional for people because it directly affects …
Actually, production is the problem - it's a Lego box of bits, each with its own expansion coefficient. That's OK when you test it, but pesky buyers want to actually use it, and using it makes it heat up. Old-fashioned thermostats may have been designed to bend, but most modern chips are not, and neither is Blackwell. Well, OK, technically that means production isn't the issue but actually using it, but you get my point.
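For a sense of scale, here's a rough back-of-envelope sketch (textbook-ish CTE values and made-up dimensions of my own choosing, not Nvidia's actual materials or package sizes) of how far a silicon die and an organic substrate drift apart over a modest temperature swing:

# Rough thermal-expansion mismatch estimate: delta_L = alpha * L * delta_T.
# CTE values are typical textbook numbers, not Blackwell-specific data.
CTE_PPM_PER_K = {
    "silicon die": 2.6,         # ~2.6 ppm/K
    "organic substrate": 17.0,  # ~17 ppm/K
    "copper": 16.5,             # ~16.5 ppm/K
}

SPAN_MM = 50.0    # assumed span across the package
DELTA_T_K = 60.0  # assumed idle-to-load temperature swing

growth = {
    name: cte * 1e-6 * SPAN_MM * DELTA_T_K  # expansion in mm
    for name, cte in CTE_PPM_PER_K.items()
}

for name, dl in growth.items():
    print(f"{name:18s} grows {dl * 1000:6.1f} um over {DELTA_T_K:.0f} K")

mismatch = (growth["organic substrate"] - growth["silicon die"]) * 1000
print(f"die vs substrate mismatch: ~{mismatch:.0f} um across {SPAN_MM:.0f} mm")

Tens of microns of differential movement across a package is exactly the sort of thing that stresses solder joints and warps interposers, which is the point about the Lego box of bits.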
It's actually an achievement that they managed to make this work at all; starting to sell it before they had a stable, high yield was possibly a tad premature.
Might I, in middle-management-poorly-educated-really, rephrase that as ...
The demand will not burst. The bubble won't either. Five years ago, yes. Now? If your VP isn't using their own LLM, then walk.
AI will rebrand. New SI features will creep in. Your local LLM will use SI to interact with other SIs but the core will remain AI for a few years as it is easier.
"¬hat will happen to demand once the AI bubble bursts?"
Sell NVidia. They will survive on the Apple model of just being so loaded and ruling with disdain. They have done so well out of this bubble, and their stuff WAS the best. Now it is just more of the same for bigger prices.
We don't want these GPUs drinking leccy like a fish drinks water. Should he even be driving? It is a bastard method of non-organic intelligence.
We want temporal architectures. Not the same old same old.
When building your models, think of the GPU as an overflow pipe on the sink. Focus on the NPU, really, because of the savings. Avoid models with W/Bs over 100GB for home use. Not worth it, and they use lots of power.
Most will be comfortable with the 8GB W/Bs.
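As a rough rule of thumb (my own sketch: memory is roughly parameter count times bytes per weight, plus a bit of overhead; the model sizes and 20% overhead figure are illustrative assumptions, not measurements), you can estimate whether a model's weights fit in a home box:

# Rough memory footprint for a model's weights: params * bytes_per_weight.
# Parameter counts and the 20% overhead are illustrative, not measured.
BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}
OVERHEAD = 1.2  # assumed ~20% extra for KV cache, activations, runtime

def weights_gb(params_billion: float, quant: str) -> float:
    """Approximate GB needed to hold the weights at a given quantisation."""
    return params_billion * BYTES_PER_WEIGHT[quant] * OVERHEAD

for params in (7, 13, 70):  # illustrative model sizes, in billions
    for quant in ("fp16", "int8", "int4"):
        print(f"{params:3d}B @ {quant}: ~{weights_gb(params, quant):6.1f} GB")

A 7B model at 4-bit lands comfortably under the 8GB mark, while a 70B model at full precision blows straight past 100GB, which is the point above.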
When/If TheBubbleBursts then {
> Pickup CheapBits&Bobs (for, Home, LLM) && If YouCareAboutNVidia then {
>> Don't;
> END_IFS&BUTS. }
} // End. Will LLMs ever be able to edit to our level, I ask!