Quantum computing bubble popped.
'AI' bubble next.
The quantum computing industry seems to be just as delicate as the qubits it relies on. Shares in some publicly traded QC companies saw steep declines today, following Nvidia CEO Jensen Huang's rather reasonable CES remark that practical quantum systems may still be 20 years away. Speaking at a financial analyst Q&A session …
I can't wait for the AI bubble to burst because the hype is driving me mad!
Everywhere I look I see "A.I. enabled", even when it hasn't the faintest connection to it. AI PCs being touted by Microsoft as the next "must own", many companies claiming their AI will allow employers to fire all their workers and make unheard-of profits, etc.
It bothers me that employers are primarily interested in AI so they can fire all their workers, without caring about the social effects. And that's despite many think tanks having already predicted that AI will not decrease the number of jobs but will in fact lead to job creation.
Because all three are fantastical distractions from what actually works: Computing, Nuclear Fission, and er, Intelligence.
Ironically, and despite being "always 50 years away" on Earth, Fusion is the only one of the three that has proven to be somewhat practical*
And no, I don't think an LLM is going to come up with a useful Quantum Algorithm anytime soon.
* (only if you count Solar power as Fusion. But Earth-bound Fusion fans tend to forget that old Sol has approximately the same power density as a common-or-garden compost heap: <300 W/m³ at its very core, or far less if you average over the whole star. It's just fucking BIG)
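A quick back-of-envelope check of that compost-heap comparison, as a Python sketch (the constants are standard textbook values; the whole-star vs core split is my own illustration):

    import math

    # Rough power density of the Sun: total luminosity divided by volume.
    L_sun = 3.8e26         # solar luminosity, watts
    R_sun = 6.96e8         # solar radius, metres
    R_core = 0.2 * R_sun   # fusion happens mostly within the inner ~20% of the radius

    vol_star = (4.0 / 3.0) * math.pi * R_sun ** 3
    vol_core = (4.0 / 3.0) * math.pi * R_core ** 3

    print(L_sun / vol_star)   # ~0.27 W/m^3 averaged over the whole star
    print(L_sun / vol_core)   # ~34 W/m^3 averaged over the core region

Either way it's well under 300 W/m³; what the Sun has going for it is sheer volume.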
> Because all three are fantastical distractions from what actually works: Computing, Nuclear Fission, and er, Intelligence.
Well, one out of three ain't bad (you pick the two that don't -- I'm not stepping into that snakepit).
It is a bit self-serving to say "that thing we don't do isn't going to happen for a long time" when the same might be said about AGI. I'd say that's much further out than a quantum computer capable of breaking current encryption, and while neither is 100% sure to happen, I would put a lot more faith in a working quantum computer being "possible" than in a true AGI, at least while it's built on LLM technology. I'd say the prospect of the latter is negligible, no matter how big the training set or how much computing power they throw at it.
True AGI isn't even well defined. How many angels can you fit on the head of a pin? However, ML, including LLMs, has already reached practical working-tool status today. That includes positive and negative (mis)uses, from the POV of human well-being. QC, on the other hand, apart maybe from that Canadian version, can't even do Hello World yet.
> Shares in some publicly traded QC companies saw steep declines today, following Nvidia CEO Jensen Huang's rather reasonable CES remark that practical quantum systems may still be 20 years away.
Not as bleak an assessment as people are making it out to be...
I wouldn't mind investing now in some company that's going to be the Intel/ARM/Apple/Qualcomm of the quantum computing world in 20 years...
Of course the question is: which QC company will it be?
And yes, it does sound like Nvidia is saying "Stop wasting your money on other companies' empty promises, and start putting it into our empty promises..."
He's right, you know. Even with Google's so-called "breakthrough" in error correction, quantum computers aren't anywhere near running useful applications in the foreseeable future. And horizons of 20 years or more are simply too far out for venture capitalists.
Ergo, the money will start to dry up in the near future. Only Big Tech companies like Google and Microsoft will be able to continue investing in the tech.
The same is true of Nvidia's current AI bubble.
Real artificial intelligence, as opposed to aggregated, averaged statistical summaries, is still 20-30 years away - the same as it has been any time you asked anyone with real knowledge of the subject, ever since my 4th-year classes in 1986-87, including what passed for a 400-series "Artificial Intelligence" course back then: Expert Systems.
Analogue computers are "programmed" by changing the interconnections between units and the values of the various coefficients associated with each unit (integrators, amplifiers, etc.). Your "program" is actually a wiring diagram with annotations.
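To put that in terms the purely digital crowd can run, here's a rough Python sketch of what one of those wiring diagrams amounts to (the example ODE and every name in it are mine, purely for illustration):

    # An analogue "program" is a patch: which unit feeds which, plus coefficient
    # settings. Here: one integrator fed by its own output through an inverting
    # amplifier with gain k, i.e. the machine solves dx/dt = -k * x.

    k = 0.5       # amplifier gain - a potentiometer setting on a real machine
    x = 1.0       # initial condition dialled into the integrator
    dt = 0.01     # digital stand-in for continuous machine time

    for _ in range(1000):        # 10 "seconds" of machine time
        x += (-k * x) * dt       # the integrator accumulates the amplifier output

    print(x)   # roughly exp(-0.5 * 10) = 0.0067: exponential decay, no "code" as such

Change a connection or twiddle a pot and you've changed the program.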
I doubt even 1% of the Vultures here have ever used one of these things (more likely if you've got some history in control systems or aircraft*).
But IMHO what most here would call an actual computer means something you can control with code.
MS's Q# supposedly does this, but I can't make head nor tail of it. Anyone here used it IRL?
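Not Q#, but for a flavour of what "controlling it with code" looks like today, here's the quantum Hello World (a Bell pair) sketched in Python with Qiskit - my choice of toolkit, purely for illustration:

    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)      # two qubits, two classical bits
    qc.h(0)                        # put qubit 0 into superposition
    qc.cx(0, 1)                    # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])     # read both qubits out into the classical bits

    print(qc.draw())               # prints the circuit; actually running it needs a
                                   # simulator or a real (and still noisy) backend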
My instinct is that designing algorithms from the ground up to use the (theoretical) power will be like designing algorithms to make full use of the parallelism of HPC arrays, only worse. Depending on your PoV, Amdahl's law, Gustafson's law or Gunther's Universal Scalability Law are relevant.
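For reference, the three laws side by side as a Python sketch (the parameter values are purely illustrative):

    # Predicted speedup on n processors.
    # p     : parallelisable fraction of the work (Amdahl, Gustafson)
    # sigma : contention penalty, kappa : coherency penalty (Gunther's USL)

    def amdahl(n, p):
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson(n, p):
        return (1.0 - p) + p * n

    def usl(n, sigma, kappa):
        return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

    for n in (16, 256, 4096):
        print(n, amdahl(n, 0.95), gustafson(n, 0.95), usl(n, 0.05, 0.0001))

    # Amdahl flattens out near 1/(1-p) = 20; Gustafson keeps climbing for scaled-up
    # workloads; the USL eventually goes retrograde - more hardware, less throughput.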
But time will tell who's right I guess.
*Each Concorde nacelle had 13 "computers" running the inlet and the nozzle (Olympus predates FADEC by decades) but AFAIK only 1 was a "proper" programmable digital computer. Ferranti?
When the AI bubble pops, who's going to pay? How many governments will it take to cover Huang's, Altman's, et al.'s 'failures'? With the planet topping out at around ten billion people, how much per person will this bubble cost? 100 per person at 10 billion people = a trillion....
Quick! Make more babies!!!