Looks like you're betting the farm on A.I.
Would you like me to run you a bath?
There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) "Write an A-Level paper on the themes in Shakespeare's Romeo and Juliet." I propose a fourth: AI is now as good as it's going to get, …
Just check retractionwatch and spot people who were labelled as exceptional talents and got a lot of funding and nice careers.
And then remove the possibility of retractions and the whole peer review process and let a chatbot calculate probabilities from undeclared (wonder why..) datasets and just step up the coercion until people just consider everything that is being calculated valid. What could go wrong?
AI has limited value as a user interface for search queries, for those who think that boolean is a South Pacific language.
It may have some use in niche circumstances. This will not be especially lucrative.
It is not reliable enough to be built into operating systems or software and should only be an option. That is, there should always be an 'off' button, or better yet, an 'on' button, so it is off by default.
AI has no investment value, as the ROI is negligible and the data centre costs are vast. Essentially, it is a luxury gimmick.
It is also a major security issue and should not be used on any secure, enterprise, government or military system. It is inherently unreliable, it may phone home and requests to it can be mined.
No sane person would invest serious money in AI.
If you do want to invest money, invest it in your security - no public internet access to your intranet, two systems per desk, lightweight/ephemeral data only on any internet linked system, reduce the amount of data you hold, store it on paper if you can. Use simpler software, which will be more reliable.
As for the AI rollercoaster, it was amusing. Time to move on to the next big thing, hopefully distributed systems, including distributed social media.
Nokia was doomed anyway. They missed the boat on moving to smartphones, and their pivot to using MS Windows for Phones was a last-ditch effort to remain relevant. As far as phones went at the time, they weren't actually all that terrible, but Microsoft repeatedly and chronically kept fumbling the ball on their phone OS, which doomed Nokia.
"No sane person would invest serious money in AI."
Not sure about that. Maybe not so much money, and perhaps invested differently. But you are looking at it, in a user context, as someone with a high IQ. Most of the population, for a number of disgusting reasons not of their making, just want to be told what to do. AI, or LLMs as I prefer to call them, are a great development, but they are not a panacea. Also, the crazy investment rush is a big gamble by the oligarchs, because if someone really pulled it off they would rule the world. Well, until their government sent people with guns to take it over so they could rule the world.
General AI as being built now seems just to be a pattern-spotting system optimised towards human languages. To my mind, optimising towards languages is a very silly way to go other than for demonstration and proof of concept purposes, because we already have very good human language systems out there now. These are known as human beings.
Pattern-spotting on other things by contrast is quite a good idea, but once again this is a specialised tool of limited real-world use.
I think a caveat here is that, similar to the way the 'metaverse' tried unsuccessfully to claim games such as Roblox and Fortnite under its banner, the term 'AI' successfully encompassed all of the various machine learning tools that we have had for a long time now and that are very useful. With specialist machine learning systems, image recognition, large language models and reverse diffusion image generators all being marketed as one single thing, the vendors have been able to ride on the success of some of these systems to pay the capex on others.
I also think LLMs will have a bit more use than you suggest: I suspect a few legitimate uses exist somewhere, but there are also plenty of places where quality doesn't matter. For instance, they are great at quickly generating tons of low-quality but structurally solid text. There is certainly a market there. I would argue it isn't making anything better for humanity, but there are those willing to pay small amounts to do it, and you probably don't need to train the most effective models to support spam. Image and video generation also only needs to be good enough to flood social media. Maybe that has long-term financial effects on those networks (other negative effects are a given), but I think those markets will persist, to some extent, for now.
I know a depressing number of people who use and trust AI when they shouldn't, and they have jobs and lives where they are sufficiently padded from the consequences of their mistakes and misunderstandings that they will simply continue to do so. I am sure that remains a market. Some portion of people will sacrifice almost anything for convenience. Hopefully that market doesn't keep growing until the people and systems they unknowingly rely on collapse.
Of course the insane valuations and spending we have seen have been based on the idea that 'AI' was going to replace people. That companies would be able to lay off a major percentage of their workforce, and finally downgrade others in white collar sectors to 'unskilled' labor. That is the only justification for the spending we have seen, and I think people are increasingly waking up to the fact that these technologies we have today simply do not do this. So the bubble will burst and the dust will settle and we will see what remains.
"and finally downgrade others in white collar sectors to 'unskilled' labor"
When was the last time you saw a room full of ledger clerks balancing the books?
That downgrading and replacing has been happening for a while. AI is just the latest step along the way.
The irony is that for the most part it's going to replace middle management, not the coalface
Except gamers mostly can't pay the top dollar that GPU makers would like. When the AI bubble bursts, those GPU makers will be struggling to sell anything other than gaming cards, so they'll have to forgo their 60% margins and sell at rates that the gaming market will accept.
As the fizz went out of crypto-mining, along came AI, and that was just gravy for the makers; they thought they'd died and woken up in heaven. But the storm clouds are gathering, and where is ANY high-volume business use case for new GPUs beyond the current AI bubble? Even if there is one, it can almost certainly use the billions of high-end processors already bought, paid for and installed, so there's a big question as to what the value of the market for new GPUs will be in, say, 2028.
Wouldn't want my pension invested in big tech in general, and GPU suppliers in particular.
"Wouldn't want my pension invested in big tech in general, and GPU suppliers in particular."
I'll be changing one of my funds this week. It has done well, but I think things are going to pop before the new year. Of course the bastard banks may get it all anyway. Apparently they are the first creditor to most finance businesses and get first call on money in a collapse. We do not own the shares, we do not own our pensions, we are simply beneficiaries. Scary when you think about it. So in a collapse the big banks sweep up the assets. No doubt that would include our homes. Just wondering, if there is a collapse, what happens to the big guys: BlackRock, Vanguard, State Street etc.? Do they go to the banks? Although I suspect the banks may control them via obfuscated routes anyway. There's some weird ownership thing going on among those guys and I'm convinced their priority is ownership and control of markets, over and above fund beneficiary interest.
So when a financial disaster happens, the big banks absorb smaller ones, take assets and emerge controlling more of the world. At least that's how the last big depression went. We need to find a way to stop this; it's not in our interest.
>firstly with bitcoin then AI
It hasn't been possible to mine bitcoin with GPUs for many, many years now (ASIC hash rates are orders of magnitude higher) - the previous big demand for GPUs was for mining other cryptocurrencies.
GPU prices were indeed originally much lower, even for the highest end models - but more than a decade ago, nvidia learned that the suckers still pay, price increase after price increase.
You would hope that AMD or Intel would compete with nvidia by offering GPUs that work with free software (or at least aren't digitally handcuffed, allowing for a free driver to be written) - but no, those run more proprietary software and are digitally handcuffed and the price isn't much lower either.
The money will be in MESH networking and swarm robotics, with limited and specialised AI that is useful for military purposes. Ukraine has pioneered a path others will follow, and sooner or later we're going to see someone building a production line for small, general-purpose attack drones designed to clobber armour or devastate groups of people, depending on how the explosive system is triggered.
The only real question then is which minor state gets the overrun treatment and can the actors involved hang onto it long enough to earn a profit back out of the venture?
>buy a PC without copilot
Or even a mouse! I have a Logitech item that I like a lot, but it has developed an irritating habit of double-clicking when given a single button press, so I went looking for a replacement. At least two of the devices that I identified as candidates have "AI features", as in e.g. users can assign AI shortcuts to the mouse, such as launching Copilot or ChatGPT, summarizing selected text, generating code snippets, or autofilling templated emails. Dammit, I just want something that single-clicks on a single click!11!
I'm not sure I see such direct comparisons to the dot com bubble. Sure the big players are massively overvalued, but hardly anyone else has poured serious money into it. Most have just rebadged old tech with an AI label. So the bubble will pop, but it'll likely be limited to those few companies.
Even then, the ones who have invested in infrastructure like Google and Meta will probably take a bath, but the likes of Nvidia, who are cash rich and whose other market segments are still hugely profitable, will probably be just fine. Not close to the most valuable company anymore, but otherwise chugging along like they were before.
So I don't think there's much reason to expect a recession from this.
Of course Nvidia will still be rich. They’ve made buckets of cash, and they aren’t stupid.
What will change is that their *future* profitability will drop off a cliff. No way they continue making $10B/qtr selling video cards and Mellanox NICs.
It will still be a business - but a much smaller one.
In a sensible world, NVIDIA would distribute that cash to the investors, because they're not going to make sensible use of it, and go back to being that smaller business; the investors would be pleased with their windfall and realise it was a one-off. In the real world they'll scream and shout at the management.
So you think the damage will be contained to AI bubble stocks? Perhaps, but that's famously what the Federal Reserve chairman thought with regard to subprime mortgages in 2007. Some of us out here are skeptical that big-time market overpricing is confined to a few AI bubble stocks. Note that the S&P 500 price/earnings ratio is hovering at about twice its long-term average of 15.
No I don't. CDOs and SIVs were the instruments investment banks used to sell off subprime mortgages. By getting the ratings agencies to treat them like they were as secure as treasury bonds they conned most of the market into buying them, including pension funds. Then those were bet against by the same investment banks through credit default swaps.
There is however going to be a lot of spare data centre capacity not to mention all the hardware that they have installed in anticipation of the Big Payday.
Less demand for DCs, and the servers that go in them, will hurt the hardware manufacturers. The biggest fallout will be in confidence, and that could cause a stock market crash. These two together will certainly cause a slowdown, if not a recession, in the IT world.
Google, Microsoft, Nvidia etc are probably big and cash rich enough to ride out the storm. Venture funds will have to take a haircut (they are used to having to do that). Metaverse/Farcebook? They could be in trouble.
What will be left? Some will survive, mostly those that were put together for a specific application and trained on specific, sanitised data. Oh, and the cockroaches.
Personally it can't come quick enough for me, the sooner it happens the less pain there will be for all of us.
There, that is my prediction, do I get a prize if I am right??
Or... you sell just before you >think< the bubble will burst, which then triggers a panic and people selling like crazy... and that causes the bubble to burst?
And then, if you're feeling confident, you buy up all the (now) super cheap shares, and hope they recover so you can make another killing by doing the exact same thing... and if this cycle keeps repeating, people will think you're predicting the market and follow your trends without understanding YOU are the reason for the trend.
AKA a self-fulfilling prophecy.
Correct, I pulled out expecting the crash, came back in, and now I'm pulling out again! If it's your pension, most companies won't let you change instantly, so you will always lag the market by a few days. So, let's say there's a big shock: you aren't going to get out even halfway along the down slope. In fact they will probably have halted trading before your sell order is due to be executed. You couldn't even time it exactly on shares, because there is a time lag. This is why the trading pro companies buy datacenters near the exchanges, so their algorithms get a couple of milliseconds' advantage.
Cisco was fine as a company after the dot com crash, in that it was profitable and continued to be profitable. However, it peaked at $82 on 27th March 2000, and still hasn't recovered to that level even today. $82 was pricing in growth that was never going to happen.
I think Nvidia will end up looking like that.
Imagine venture capital firms pouring billions into AI-related stuff, and then reality hits.
They will definitely find ways to get their money back. Preferably from taxpayers like the financial institutions usually do. Unless it is funded mostly by regular people. Then they'll just need to deal with the pain.
So the bubble will pop, but it'll likely be limited to those few companies.
Those companies are half the Nasdaq. The ripples will be more of a Tsunami and we are overloaded with debt. The tech market is probably highly leveraged and the margin calls will spread like a shock wave from a nuke. Probably! Anyone analysed the impact of the Nasdaq falling instantly by 50%?
There are several players who have obviously thrown a lot of money at it, through building data centres specced entirely to serve up "AI" mush. It's obvious because they try to shove the "AI" nonsense in your face at every opportunity, and it's also already obvious that people don't actually want it, or they wouldn't have to try to sell it so hard. The ones that spring to mind most are Meta, Microsoft, and Google, and to be fair, those are all companies that could do with having their wings clipped. At the very least, it might mean that Meta AI, Gemini and Copilot aren't turned on by default and in a way which is hard to disable or otherwise get rid of.
the S&P is 19% FAANG and of them NVIDIA is some stupid amount of that 19%.
Since Wall Street is full of idiots and morons, if NVIDIA sell ONE less GPU than the year before, let alone 1% less, then it'll be the end of the world & everyone must be fired!!! The growth rot economy, as Ed Zitron calls it... must grow, must grow, must grow!
When NVIDIA inevitably goes, everything else will follow. Don't forget that the stock market has no relation to reality. Somehow OpenAI is "worth" $300 billion, & remember Amazon's amazing shops that turned out to be 1,000 Indians watching cameras?
Exactly, markets are not connected to the fundamental realities people think they are. Tesla is the largest auto-maker by market cap, four times the size of Toyota, even though Toyota has four times the earnings, more revenue, more employees, and people don't associate it with a famous creep. Investors care about nebulous future growth potential, and the sentiment of other investors more than anything. Tesla is worth that much because its CEO has convinced them it will solve self-driving cars and make humanoid robots ubiquitous.
Even if genAI has more usefulness than I give it credit for, and could continue to grow for years and become profitable, the current explosive spending cannot continue, and the second it slows people will pull out. It isn't enough to be sustainable and profitable; if you aren't growing then you are a failure. All this capex in the AI space is flowing to nvidia, and it has to keep doing so in ever increasing amounts or nvidia will look bad. The industry simply cannot justify spending more on GPUs every year forever.
While this is even more speculative: if nvidia starts to sink in price, I fully suspect that it will drag many associated AI companies down too, just because of the vibes. Even if these companies need to start operating without the insane capex spending to ever generate a return, I would not be at all surprised to see a 'correction' on all of them the second they stop buying GPUs. It isn't like the current 'fundamentals' look good for many of the AI operations. Maybe Meta's advertising business is making gains with GenAI? Cursor could be making money as long as the owners of the foundation models they rely on don't squeeze them too much?
"Yeah my apartment building is a bit boxy, few windows. But hey we never lose power during a storm and I have a fiber connection in every room!"
(During the .Com bubble I lived in state CT while simultaneously working for two .Com's, one in MD and one in MA (human hyper-threading))
A friend of mine worked for a company that bought out a not-very-old telecom facility (I don't recall if it was a victim of the dot-com bust or of the migration of telecom to newer tech). It was filled with rows of two-post racks.
They left a lot of the racks in place, and attached drywall to them. Ta-da, instant office spaces! (or, since they didn't go floor-to-ceiling, perhaps better viewed as "medium privacy cubes"?)
Yeah, AI is like an intern. Like a high-school student intern who smokes weed in the parking lot at lunch every day.
It's like the New Riders of the Purple Sage lyric ...
"Smoking dope, snorting coke / trrin' t' write a song / forgetting everything I know / 'til the next line comes along. "
Rock on, Sam.
So the air is coming out. I'm waiting for someone to slash the tires.
What will this mean for the dozens of bitbarns that are planned? I've got the feeling that the electric grid has a chance of surviving the next decade just fine.
Death to AI, and end of career to all the besuited snake-oil salesmen who charmed the Boards all over into believing in it.
I went from simple code questions, mostly syntax, to getting LLMs to write tests through to experiments with agentic and prompting.
Now it’s a last resort if I can’t figure something out quickly and usually that’s a waste of time, so back to the old tried and tested RTFM and ask questions.
This is with ChatGPT 4.1 and vscode.
I have a couple of short upcoming courses through work I'll be attending to see if I can gain any more insights, but I'm near done with AI as it is now for coding, beyond simple automation. Stub out tests, translate stuff, super simple grunt-level stuff, because that's all it's good for coding-wise.
As has been stated, it doesn’t learn. It does draw from prompt conversation context, but that can unravel into ridiculous hallucinations and a total mess in agentic mode. Incapable of cleaning up, even with very specific prompts.
LLMs have their use cases I guess, but only a naive vibe coder considers them capable of creating structured and logical clean code.
I have a few thousand lines of Python code that I've been working on. It's pretty ugly, since I haven't done any serious work for three decades in any language and gave up gainful(???) employment 9 years ago.
I have thought about asking one of the AI things if it could clean up the code in a more helpful way than pylint does.
Maybe I'll try it. It's unlikely to make the code worse...
There are quite a few classes that have significant sections of almost-but-not-quite identical code*. I feel certain that they could be tidied up and made cleaner either by creating a few additional classes to handle all the similar tasks or by consolidating several existing classes into one more flexible bit of code.
*It's a bad self-taught habit I got into in the 1980's. Copy, Paste, change a bit...
Bit too old to go on a training course now!
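For illustration (class names entirely invented, not my actual code), the kind of tidy-up I have in mind is collapsing the copy-paste-and-change-a-bit classes into one parameterised class, something like:

    # Hypothetical before/after: consolidating near-duplicate classes.
    class CsvExporter:                       # before: one of several clones
        def export(self, rows, path):
            with open(path + ".csv", "w") as f:
                f.write("id,name\n")
                for r in rows:
                    f.write(f"{r['id']},{r['name']}\n")

    class TsvExporter:                       # before: differs only in delimiter and suffix
        def export(self, rows, path):
            with open(path + ".tsv", "w") as f:
                f.write("id\tname\n")
                for r in rows:
                    f.write(f"{r['id']}\t{r['name']}\n")

    class DelimitedExporter:                 # after: the differences become parameters
        def __init__(self, delimiter, suffix):
            self.delimiter = delimiter
            self.suffix = suffix

        def export(self, rows, path):
            with open(path + self.suffix, "w") as f:
                f.write(self.delimiter.join(["id", "name"]) + "\n")
                for r in rows:
                    f.write(self.delimiter.join([str(r["id"]), r["name"]]) + "\n")

    csv_exporter = DelimitedExporter(",", ".csv")
    tsv_exporter = DelimitedExporter("\t", ".tsv")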
The one use case I have found is actually the opposite of vibe coding.
We have to maintain a lot of very old, and very poorly structured code, written in a language that is no longer supported by its maker (Foxpro), but which still works, and is still out there providing a service to our customers. I'm sure a lot of other businesses are in a similar position, although few will openly acknowledge it.
Now, of course, we want to modernise and rewrite things following SOLID and using modern architecture, tools and languages, but part of that job involves understanding tens if not hundreds of thousands of lines of code that has evolved incrementally over four decades or more.
Somehow, Copilot does a fairly decent job of reading through all that code and summarising what it does, even though the files we are feeding it aren't the actual source code files, but text versions that have been fed through a conversion tool (Foxpro stores its source code in binary Dbase tables, don't ask). This helps us to at least map out the functionality of the software on a broader scale, without having to sit down with pen and paper and make notes while reading through a 3,000-line method, which is what we would otherwise need to do.
Sometimes, Copilot can write boilerplate unit tests for us, but in my opinion, it's not much of a time saver, because you still need to read through them and verify them, which means you still need to do the same thinking you would do if writing them, and I could have probably typed them out just as fast whilst doing that.
Posted AC, because I prefer not to identify myself or the company I work for, to some of the less salubrious commenters here.
This has been 100% of my experience with the thing.
Since we got a subscription at work to Google's AI service - that's supposedly optimised for coding - I thought I'd give it a go and get it to write a simple little browser game for me, just to see how it did. Now, I've not been a web developer in about a decade, but I have just about enough javascript to do what I wanted myself, though I'd expect it to take me a couple of weeks to knock most of the bugs out and re-learn all the various useful libraries that have massively changed since I last used them. How did Gemini do?
Well, it started out really rather well. It created a nicely rendered globe for me to use as the play area, seemed to understand the concept of iterative geodesic partitioning to provide a mostly hexagonal grid to place the pieces on, knocked out a UI that let me select things and pan and rotate the camera. After about an hour of it I was very impressed.
Then the wheels came off. Completely.
Next step was to partition our sphere into a number of cells each of which would function as a territory to be captured in the game. By creating these cells out of collections of the derived hexagons plus the original pentagons that made up the seed polyhedron we can hide the fact that those pentagons are in there from a visual perspective and make the "grid" appear regular.
As soon as it was asked to do this, the clicky UI stopped working. It'd partitioned up the world nicely, but now I couldn't select anything. "Go back and do that again and put the controls back in." I asked it.
At which point it reduced every territory on the sphere to precisely two cells. "No, that's wrong. Put the territory generation code back the way it was, and then revert the UI code" and of course it profusely apologises, admits its mistake, and... breaks the Z-index rendering so all the game pieces appear behind the terrain.
I asked it if it remembered the conversation we'd already had - which it assured me it did, and that it was capable of keeping track of prompt chains thousands of entries long, and we'd only been going for a hundred or so (we hadn't, in fact - this was at best about the 50th instruction). So I asked it if it remembered what I told it three prompts ago, which it assured me it did. I asked it if it remembered the state of the code as it was at that point - again it assured me that it did indeed. So, please, discard all changes since that point and put the code back the way it was.
Did it? Of course not, it fixed the Z index bug, but now created a new UI bug and the territory grouping was still broken.
At this point I thought it'd be best if I took a look at the code for myself and... wow. I mean, I've written some garbage in my life, but wow. It was utterly incomprehensible. It was extremely heavily commented, but the comments didn't really explain anything, or in some cases even appear to relate to the code they were placed next to. It had created a bunch of objects that had no instances, and attempted to instance objects that didn't exist - which would have been more of a problem if they hadn't been stuck inside functions that were never called.
About 50% of the code in there was dead ends that appeared to have been included because they existed, in some form or other, in whatever training material it had nicked the calling code from, so in they went...
...and that's the kicker. It doesn't understand why that's bad. It doesn't have any clue how the code works. It's just copy-pasting from examples it's found in the training data and iterating until it gets something that runs - which to be fair is pretty much what I do - but the difference is that I know that functions work better when called, and won't attempt to instance objects I didn't define.
Y'know, because I know at least in theory how to write code. It doesn't. And never will.
It always amuses me when people talk about ChatGPT, Claude etc etc as "AI". These LLMs use clever statistical trickery to emulate something (nobody knows what exactly) but do they exhibit "intelligence" in the sense we humans use the word? Nope.
I've used various of these bots for low-complexity tasks (eg "create a complete zsh completion script for app XYZ" or similar) and not once was the result immediately usable. Even after a few iterations the output is just not good enough.
Your comment prompts me to ask a question of our non-Anglophone readers:
How is "AI" being received in say France, Germany, Japan or wherever's home to you?
Is French Claude the same (suspect) experience as American English Claude? Is German ChatGPT the same as American English ChatGPT?
I know I'm assuming that the people who stole the web to train their models also stole the French, German etc web as well. And how's AI being viewed in your respective business worlds? Are credulous fools throwing money at AI, is there talk of an AI bubble, or what?
I find that most farts are self confessing, either by sound or aroma. And if you're in the room with a deaf anosmic, you'll give yourself away by smirking.
Worst of course is dropping one off in a lift. If it's a real stinker, your first response will be "Oh joy! THAT is craftsmanship!" Then the lift will 100% guaranteed stop at the next floor to admit one or more attractive women, who will then dispense the withering "you disgusting male" stare.
I can't speak to what folks in France, Germany, or Japan think of AI, but I discovered by accident that DeepSeek is multilingual and has a strange sense of humor.
I was asking it to locate some historical data about Japan and it provided helpful translations of Japanese words and phrases. That got tiring, so I told it that I knew Japanese, so no need for the translations.
That's when DeepSeek switched to Japanese. I played along for a while, then asked it what had triggered the switch. I didn't get a direct answer. Instead, DeepSeek offered to switch to Osaka dialect if I wanted to chat about sushi, or Kyoto dialect if I wanted to chat about culture, or any one of several different flavors. It even offered to switch to a kind of street slang, just for fun.
I also discovered that DeepSeek has a "fun" mode. In one chat I asked it to identify a movie, and gave as many details as possible of the setting, the narrative, and the characters. DS went on a weird stream-of-thought ramble: "It could be XYZ, but no, that has three protagonists, not two" or "That's a close match, but the location is different." And finally it came up with, "But that stars Vin Diesel, so it can't be right."
"Artificial intelligence" has always been a marketing term, not a technical one. The researcher who invented it (whose name I can't be bothered to look up right now) thought it would sound impressive in a proposal for a DARPA grant he was writing.
I said 25 years ago (at least) when Watson was being talked up that all we are seeing is clever pattern matching.
Nothing has changed. There.
The only thing that has changed is the patterns being matched are gullible idiots and grifting scamsters. And I would say "AI" has knocked that out the park.
I think in this case it's worth being pedantic and specifying that these are LLMs and not 'AIs'. LLMs absolutely seem to be at a standstill. 'Infinite scale-up' turned out to be 'we ran out of data and it started human-centipede-ing itself'. Performance is flat; the best they're doing right now is drastically cutting the cost (power used) and then making the LLM work much longer for marginally better results. At this point there is no way in hell they are getting Artificial General Intelligence (actual thinking) from an LLM. There never was any way to get that; it's just a stochastic parrot with some human-written code trying to whip it this way and that (attention heads, etc.). They just wanted to believe that as they made it bigger and bigger, eventually [Handwavey Shit] Would Happen.
So if you want actual AGI, or just drastically improved performance from ChatGPT 4 / Claude 4, someone is going to have to come up with a radically new algorithm and/or technology.
I find LLMs really useful for one thing: denoising and upscaling images. It doesn't matter if a pixel or two is off a bit, the result looks better than the original or a naive bicubic upscale. And, uh... yeah, sometimes I use it to OCR text from images but guess what? Sometimes it bullshits, so only if it's not critical! And that will never change, making an LLM not hallucinate is equivalent to the halting problem.
Mod parent up.
Upscaling video, cleaning up photos, audio and video, and hallucinating short video and audio clips or pictures are nice use cases for LLMs.
All the other stuff is dangerous to do.
Even if you train the models with domain-specific and/or proprietary data.
Except that none of those tasks actually use an LLM, they're stable diffusion denoisers trained on a huge corpus of stolen imagery.
LLMs are likely to be reasonably good at translation and transcription, if optimised for those tasks. They're also very good at plagiarism and copyright infringement, as they will emit large, lossily compressed sections of the stolen training material.
"Except that none of those tasks actually use an LLM, they're stable diffusion denoisers trained on a huge corpus of stolen imagery."
Aaaahhh. So that's why all the AI grumble pics have weird vaguely focused backgrounds, strange and unrealistic colour balance, unfeasible eye colours in mad, starey eyes, and hair that would only look that way with two entire tins of hairspray and Moon levels of gravity.
I suspect that, while intelligence itself has no plural, if 'AI' is used as an abbreviation for 'a system running an AI program', then where we are talking about multiple systems I wouldn't be too upset about pluralising them to 'AIs', whereas if I was studying AI as a concept, I would just be studying AI, even if I was looking at multiple branches of the AI family tree.
Don't confuse "artificial intelligence," a science-fiction concept which doesn't exist, with "AI" as used by a marketing department. Also, to be extra pedantic, "intelligence" as an abstract noun, isn't countable, but "an intelligence" as a concrete noun to describe a thinking being, is. A room full of clever people could be described as a collection of intelligences, although that would be a very wanky and pretentious thing to do.
I agree with the bulk of this, that LLMs which seem to have training data of whatever they can be fed from the Internet are not particularly reliable, and shouldn’t be confused with those AIs trained with data curated by experts for particular uses and which can be genuinely useful (e.g. in fields such as cancer screening and detecting macular degeneration in retinal images).
Of course, no AIs are truly intelligent, and are mainly an application of statistics, but some statistics are more relevant to the subject than others.
Statistics like the time I asked an AI to write a simple machine-language program to do a bitwise "or" on a machine that could only do AND, ADD, and COMPLEMENT. The correct answer makes use of De Morgan's law, but the AI said to just use the ADD instruction because it was "close enough for most purposes". One hates to think of such things getting into spacecraft or airliner navigation. Remember when NASA lost a Mars probe because one subcontractor used metric and the other used English units and nobody caught it? Managers will think they can use AI in place of actual engineers. People are gonna die.
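For reference, the construction it missed is just De Morgan's law; a quick sketch in Python rather than machine language (the machine itself is hypothetical, only the identity matters):

    # Bitwise OR built from only AND and COMPLEMENT, via De Morgan's law:
    # a OR b == NOT(NOT a AND NOT b). An 8-bit word is assumed for the demo.
    WORD_MASK = 0xFF

    def complement(x):
        return ~x & WORD_MASK

    def bitwise_or(a, b):
        return complement(complement(a) & complement(b))

    assert bitwise_or(0b1010, 0b0110) == 0b1110  # ADD would give 0b10000 here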
But was this a general purpose AI, which may have picked up some digital logic, along with Jane Austen, the complete works of Shakespeare and a load of people spouting bollocks on Twitter in its training data, or one that had been fed a curated set of training data based on Boolean logic theorems? I would expect the former to be somewhat worse at this particular task than the latter.
I have had fun in the past asking Copilot to produce an astable oscillator circuit at a particular frequency with a particular mark:space ratio based on an NE555, possibly one of the most used circuits of the last 50 years, and with thousands of examples to pick from online, and then watching it fail to connect it up in a sensible way, or to compute the values of the timing capacitor and resistors with any relationship to reality - even after it had quoted the correct formula for calculating them.
Decided LLM AI wasn't for me after that.
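For anyone wanting to check Copilot's homework, the textbook NE555 astable relationships it quoted but couldn't apply are these (the component values in the example are mine, not from the prompt):

    # Standard NE555 astable formulas; R1, R2 in ohms, C in farads.
    def ne555_astable(r1, r2, c):
        t_high = 0.693 * (r1 + r2) * c        # time the output spends high
        t_low = 0.693 * r2 * c                # time the output spends low
        freq = 1.44 / ((r1 + 2 * r2) * c)     # oscillation frequency, Hz
        duty = (r1 + r2) / (r1 + 2 * r2)      # mark:space as a duty fraction
        return freq, duty, t_high, t_low

    # Example: R1 = 1 kohm, R2 = 10 kohm, C = 100 nF -> roughly 686 Hz at ~52% duty.
    print(ne555_astable(1_000, 10_000, 100e-9))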
It was a general purpose AI. DeepSeek in fact. Sometimes you get the impression that these things are just doing a quick search behind the scenes and can spout the correct buzzwords without really 'knowing' how the bits fit together. Kind of like a student in a classroom who did not do all the homework, or your general middle manager.
I'm reminded of the supposedly excellent results from the "AI" trained to analyse chest X-rays and determine which patients would benefit from a chest drain to alleviate pneumothorax. It did very well on the test run, where it was fed images of patients who had needed a chest drain and ones who had not. Unfortunately, when presented with real-world data, it did no better than random chance.
Why was this? Well, in the training data, those patients who had needed a chest drain had, of course, been given one before the X-ray was taken. It would be unethical not to have done this, and probably would have counted as medical malpractice. All the trainers of the "AI" had managed to do was train it to identify the shadow of a chest drain on an X-ray.
This is also a cautionary tale about making assumptions about how "AI"s work. There is no intelligence there.
I'm one of the most dismissive about "AI" in media and the general vernacular.
However, you're dead right.
Those "bi-cycles" gents are pushing around with their feet whilst sitting on them won't change the world. Nor will the steam powered, tiller steered buggies.
Not pumping any stock here, but the AI clown show's final curtain isn't going to kill off Nvidia or their ilk. Real work is being done and will grow by data churning and self learning software.
And yes, Nvidia's current value is in a clown-show balloon state, but it may be a good one to watch long term after the balloon shrivels to imitate Trump's manhood.
When the Commonwealth Bank of Australia said they were switching over the call centre to AI, they lied. The judge in the case brought by the call centre workers got really sniffy about it. Apparently they were actually switching to an Indian call centre and using AI as an excuse to fire the workers without going through the process required by Australian law. Then when that lawsuit got filed they dropped the plan, but it got found out in discovery anyway...total clusterfuck but AI wasn't actually the problem.
Yeah, (for reference) we have that CBA story here ("CBA had perhaps used the chatbot to cover up a shady pivot to outsource jobs") and the Mechanical Turk (1770 chess in a box), Facebook’s "smart assistant", Cruise's "self-driving cars", and Amazon's "just walk out" here.
The latter characterizes this as: "the systematic use of the fake robot trick to lower the value of labour, until people are reportedly sleeping in tents at the factory gates, then banking the difference". It's the other side of the AI con ...
This is what is happening a lot of the time. They fire people and claim it is because of the massive efficiencies of AI. Then hire offshore or contractors.
In the case of the US govt. they are getting rid of people, claiming AI can do the job, but in reality they simply do not want the govt. to be doing the thing at all. (See the suggestion that they can cut IRS agents and replace them with AI to audit people)
That's good, keeps the climate change narrative going. There's no water because the Arctic melted and then the ocean boiled it away. Nothing to do with water-inefficient farming and industry sucking reservoirs and aquifers dry. I checked the US west coast rainfall - it hasn't changed much, and we know here in the UK the problem is water companies not doing their job of managing water, combined with population growth. It's not like it doesn't rain much in the UK! Maybe not this summer, but boy it rains in spring and winter.
I don't know where you live, but round here we've had no appreciable sustained rainfall since some time in March. That's not normal, but sustained blocking weather patterns caused by deviations in the jet stream are becoming more common. These are caused by increased warming of the atmosphere, which in turn is largely caused by the atmospheric concentration of carbon dioxide being over 420 ppm, compared to pre-industrial levels of 280 ppm. The warming effect of this is down to simple physics: sunlight heats the ground, which re-emits the heat as infrared due to the black-body effect. Some of the wavelengths emitted happen to correspond to a strong absorption peak of carbon dioxide in the infrared spectrum. This causes heat energy which would otherwise be radiated into space to instead partially heat the atmosphere. This effect was first known about in 1824.
None of this is "narrative", it is pure science that can be demonstrated through experiment, and measurable effects that can, and are, shown to be occurring. What is "narrative" is the anti-science bullshit pumped out by people with vested interests in the fossil fuel industry and their useful idiots. Which are you? Shill, or idiot?
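If you want to put a rough number on that rise, the widely used simplified forcing expression (an approximation, not a full radiative-transfer calculation) gives:

    # Back-of-envelope CO2 radiative forcing: dF = 5.35 * ln(C / C0) W/m^2.
    import math

    c0_ppm = 280.0   # pre-industrial concentration quoted above
    c_ppm = 420.0    # present-day concentration quoted above
    forcing = 5.35 * math.log(c_ppm / c0_ppm)
    print(round(forcing, 2))  # roughly 2.2 W/m^2 of extra forcing, planet-wide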
Oh please, you cannot be serious, Steven. IT and the LLMs they command for control are only just arrived on Earth and haven't even started doing any of their rock the boat and roll over the sinking ship thing yet.
Can't disagree though with statements 1) and 2) .....
1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it.
Such is surely progress by unusual and unconventional alien means and/or memes ?
Here's similar news at odds with much of the sentiment expressed in the article, and in the comments on that article we are reading here ....... and from someone who might know a heck of a lot more about what we are commenting on too ....... https://www.zerohedge.com/ai/godfather-ai-warns-superintelligent-machines-could-replace-humanity
If I had a pound every time some new technology was going to wipe out all the jobs ...
Economies can't survive unless people go out to work, spend money, invest etc etc. so taking it to its ultimate conclusion means people with no jobs = no money earned by AI companies.
The economy is self correcting.
The economy is self correcting.
Sometimes, though, that correction takes the form of the guillotine. It seems our current generation of ultra-capitalists are failing to respect that particular repeating pattern throughout history. I think the lesson here, is that it's greed that is self-correcting.
Made by people who live in their heads, so they modeled a cortex rather than an entire body. No physical or sensory nuances, just power from an old nuke plant. Looking at Zuck walk, you know he wouldn't know the diff.
Like that Lem story where the scientist's house is full of jars with brains suspended in them and that scientist thinks he's running their lives.
But done with the greedy lack of imagination and character that even a fictional mad scientist has.
If Lem were around, he could make up a better tech bro than the real-ish ones we're stuck with.
Ah, if only we could achieve "total corporeal and mental plasticity after a thousand-year rule by automorphists"! And, via "personetic" experiments fill our consciousness with the pictures of a world not existing, to become true personoids inside a computer ... (not to mention the ManfromMars!)
Cool author!
I still think that bad customer service (which is the most common variety) would be easy to switch to an LLM, and it might even offer up some improvements. CBA couldn't be arsed to actually implement a halfway serious LLM, but if they had, then the story might have been different. I can think of many UK companies whose telephone service is so poor that an LLM should be able to do better.
Then again, anybody able to enshitify such basic things as human telephone or chat customer service would be exactly the same people that would manage to create LLM customer service that's even worse. With customer service quality, there is no absolute zero, and every year shitty companies prove that.
key words: "Customers can and will complain"
CUSTOMERS. they already handed over the money.
The deciding factor was not customer support. It was the desirability of the product and the price tag. Once the product has been selected, it's strictly the price tag.
Complaining doesn't mean much once they have your money.
The goal for decades has been to remove as much money from customer support as possible, as that's a cost you don't get back. Send the work to people in a land that barely speaks English, and has an employee retention period measured in weeks, not years. It doesn't matter, you got their money. And if your product had cost 5% more, you would have lost the deal anyway. Better an irate customer than a non-customer, at least financially.
Sure, some of them will jump to the competition, but the competition did the same thing with their "support", so a lot of their pissed-off customers are jumping to you. Most customers don't need "customer service" anyway, so F*** the few that do.
If a company can deliver half the quality of customer service but do it for a third the price, they will. Because that's what the customers voted for by making their purchase. So no surprise that Customer Service would be what we'd expect to see go to so-called-AI. But it turned out to not be a third the price or not half as good.
Where you won't see second rate experiences is PRE-sales. When computers start taking high-margin SALES jobs, THEN you will know it has arrived.
I feel like Admiral Ackbar in Star Wars: Return of the Jedi after the Death Star is destroyed: sighing in relief and sagging in his chair.
This vastly overblown and overinflated hype was starting to make me lose my good temper. It was literally everywhere. Every business was chanting that they were using A.I. for this or that. Clueless management were totally caught up in the hype, dreaming of firing all their employees, making infinite profits and rewarding themselves with $100 million bonuses.
I'm crossing my fingers that the stock market will lose trillions in valuations.
> you may like to reflect on how much of your pension fund is invested in that market.
Possibly not as much as you imply. Those of us in a private pension scheme and approaching pensionable age will be hoping the fund manager will be pivoting to lower-risk options like government bonds.
For a (perhaps) surprisingly large number of citizens of non-Anglo Saxon countries the answer to that question is "none at all". Even for Britons our state pensions are, or will be, paid out of the social insurance contributions of those younger people still active in the workforce.
-A.
"Even for Britons our state pensions are, or will be, paid out of the social insurance contributions of those younger people still active in the workforce."
Remind me again what a Ponzi scheme was and how we retired folks are eventually going to outnumber the youngsters if we keep living and the reproductive ratio keeps going down.
>how we retired folks are eventually going to outnumber the youngsters if we keep living and the reproductive ratio keeps going down.
Not really a problem, in that there is an obvious solution that will always be applied, accompanied by a lot of political and journalistic whining. (*)
The retirement age is adjusted (upwards).
It's exactly the same as when life expectancy increases: people live longer, and the extra years are split between more time in retirement and more time working to pay into retirement funds.
(*) If politicians and journalists couldn't manufacture divisiveness out of anything and everything most of them would be redundant.
Most fund managers exhibit the same sheep-like tendencies as the rest of us. And ... what do you think the impact will be of the Nasdaq crashing? I don't know, but I'm pretty sure it won't be limited to the Nasdaq. Check out the sovereign debt levels, the signals from the bond market and the central bank gold buying. I'm stocking up on toilet paper as my stomach gurgles are getting louder.
Those of us in a private pension scheme and approaching pensionable age will be hoping the fund manager will be pivoting to lower-risk options like government bonds.
But those not approaching pensionable age will be hoping for good yields to build up a decent starting pot, so that compound interest has time to work its magic.
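That 'magic' is just exponential growth, of course; a toy illustration with invented figures:

    # Toy compound-growth sketch; pot size and rate are made up for illustration.
    def future_value(pot, annual_rate, years):
        return pot * (1 + annual_rate) ** years

    print(round(future_value(10_000, 0.05, 30)))  # ~43,219 with 30 years to grow
    print(round(future_value(10_000, 0.05, 10)))  # ~16,289 with only 10 years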
The tenor of the discussion, thus far, is pleasingly sceptical about 'AI'. I habitually place the concatenated letters A and I within single quotation marks to denote a misnomer for the process under discussion; likewise, I often thusly designate words such as 'democracy', 'freedom', 'defence', 'gender', 'copyright', and 'antisemitism'.
LLMs are statistical models. Conceptually, they are akin to multiple linear regression (MLR) models: parameters estimated from data, these providing a compaction of the data set and enabling interrogation of relationships among the variables. LLMs are a further generalisation wherein independent and dependent variables are postulated on the fly during interrogation. LLMs are immensely more complicated than MLRs: billions of parameters instead of up to tens, and a differing blank structure before parameter values are filled in.
MLRs have parameters (linear coefficients) chosen with a specific purpose in mind. The intent is to generate the most parsimonious model fit for the intended purpose. The motivation is to provide insight regarding relationships between the dependent variable and sets of differently weighted independent variables, the latter being included individually and, sometimes, as multiplicative combinations (interactions) considered for inclusion in the final model. MLR is a valuable aid to analysing data drawn from a designed experimental study; it helps distinguish plausible main effects from noise, thereby enabling point estimates of parameters along with confidence intervals; examples include randomised block experiments in agriculture and randomised controlled trials in medicine; these designs allow imputation of causality. Non-experimental survey designs facilitate exploring statistical associations among variables in a more detailed manner than simple correlation analysis; they may be suggestive of cause/effect relationships (else why bother?), but cannot establish them. A third category of use, that closest to 'AI', is what may be termed pragmatic prediction (PP).
An example of PP would be predicting the optimal control variable value (e.g. temperature or pressure) to use in an industrial process, wherein the optimum depends upon a set of measurements related to the process which may differ between instantiations. The physics and chemistry involved may be reasonably understood, but not sufficiently well to fine-tune the process when extraneous factors are in play. A PP model may give accurate guidance, but only so long as the values of variables entered into it lie firmly within the ranges deployed when gathering the data from which the model derives. These models are atheoretical, just as are the emanations from 'AI's set to speculative tasks.
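By way of an entirely invented miniature of PP: linear coefficients fitted to past process measurements by ordinary least squares, the resulting model then interrogated only within the ranges those measurements span:

    # Miniature 'pragmatic prediction' MLR; the process data are fabricated
    # purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    pressure = rng.uniform(1.0, 5.0, n)                     # independent variable 1
    feed_rate = rng.uniform(10.0, 50.0, n)                   # independent variable 2
    optimal_temp = (150.0 + 8.0 * pressure + 0.6 * feed_rate
                    + rng.normal(0.0, 0.5, n))               # dependent variable

    X = np.column_stack([np.ones(n), pressure, feed_rate])    # design matrix
    beta, *_ = np.linalg.lstsq(X, optimal_temp, rcond=None)   # least-squares estimates

    new_conditions = np.array([1.0, 3.2, 25.0])   # intercept term, pressure, feed rate
    print(beta, new_conditions @ beta)            # prediction within the trained ranges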
MLR provides insightful analyses only when in the hands of people au fait with the characteristics of the variables under consideration and also familiar with the underlying statistical principles. One imagines this also applies to researchers deploying an 'AI' trained upon carefully chosen data in order to influence protein folding in pharmaceutical applications. These researchers undoubtedly have a general understanding of what their model is doing but, if pressed, are no more likely than anyone else to be able to explain its detailed working in any instance of its application. Unlike MLR, the sheer complexity of the underlying model is literally beyond the ken of anyone using it. As with simpler examples of PP, the proof of the pudding is in the eating; however, the researchers have limited scope for egging their 'AI' along its way to being useful. Also, the resulting model, regardless of how successful, is wholly atheoretical despite having been 'trained' on information drawn from the literature of physics and biochemistry. Interrogation of the model cannot supply a coherent and testable theory of practical import; at best, an empirical recipe may be offered.
Another area in which 'AI' is a palpable success is in image processing. This may supplant much, yet not all, human input into graphical design and related occupations. It will turn film making and recorded music production on their heads. In principle, the services of human actors/musicians may be dispensed with. Undoubtedly, much less bloated popular entertainment industries will emerge.
'AI' image processing achievements do not indicate 'intelligence' or 'creativity'. Instead, these automated processes suggest that the nature of human creativity is less mysterious than it's cracked up to be. An 'AI' scans its database of disparate information, guided by a preordained 'mechanical' process, and picks out connections. No awareness is required. Any unusual connections which instantiate in images unfamiliar, or interesting to humans, have the hallmark of the new, of the 'created'. Plausible, and attractive, Picasso and da Vinci works can emerge. Under simple human prompts, artistic styles can be merged, e.g. Tracey Emin, enhanced by Holbein, and with a touch of Lowry. On a different tack, LLM introduction of musicality, via Beethoven, into the Beatles genre, plus the trained voice of Peter Pears.
Maybe you are a scientist/engineer? All those rants about LLM failures in casual conversation, text output, data aggregation, coding etc. are sadly true, but what you patiently explain is where the true value of these models lies. It has received very little public attention, but the feats of AlphaFold in protein structure prediction are nothing short of bowel-liquefying for those who tried to predict structures with massive computing plus the best knowledge of quantum chemistry. As you say, some industries will quietly develop their own LLM tools and get a huge return.
Upvoted, and mostly an interesting and informative post.
The part where I disagree is this...
"Instead, these automated processes suggest that the nature of human creativity is less mysterious than it's cracked up to be."
It does not take into account that the "AI" approach is dependent on processing a *vast* amount of input, way beyond that which a creative human could encounter. Also, that such information is the product of human creativity, at least it mostly is at present. As an increasing proportion of input material becomes "AI"-generated (*), there is a feedback loop resulting in decreasing "creativity" and reliability of the output. And, with the volumes of data involved, a decreasing proportion of the AI slop and muzak-like uncreativity can be filtered out by human intervention.
(*) Until the bubble bursts.
I would suggest that your view of 'AI' as being highly successful in the fields of graphic design / art / video etc. may be in part due to a lack of expertise or interest on your part. (Or not, I don't know you.)
When I look at AI-generated imagery and video I don't see anything that suggests human creativity is less special, or that 'AI' can perform the feats you describe here. One with a trained eye or ear can find many frustrating or disappointing aspects in 'AI'-generated 'art'. You can find no shortage of artists criticizing the slop anywhere artists gather, whether it is inconsistent perspective, bad framing, or perhaps the lack of a theme or context for what is being displayed.
But more basic than that, I don't even think the premises you present are valid. "Tracey Emin, enhanced by Holbein, and with a touch of Lowry" is a strange statement. These artists' styles all evolved purposefully, seeking to focus on some aspect of their subject, evoke certain emotions and convey specific ideas or points of view. What does it even mean to weight these things? What idea does an image created by a model prompted like that convey? So much of art is about trying to communicate with each other, to express something you feel or to understand the experiences of another. Something 'plausible' and 'attractive' is still not art, and it is bewildering to me that someone would consider that to be the metric for 'palpable success'.
"LLM introduction of musicality, via Beethoven, into the Beatles genre, plus the trained voice of Peter Pears."
Blimey, can't imagine anything more excruciating, except perhaps Vivaldi merged into the Death Metal genre, plus the stunning voice of Florence Foster Jenkins.
AI is good at some things, average at others.
Removing annoying people in photos - average.
Writing simple PS scripts - good.
Writing not simple PS scripts - bad.
Turning dog photos into photos of dog dressed in renaissance clothes, painting a portrait of another dog - average.
Turning dog photos into photos of dog dressed in renaissance clothes, sitting on a stool, reading a newspaper - good.
Using a photo of a dog running in a field to create series of photos of dog enjoying a shower with soap, loofas, etc - bad.
Asking AI to re-write your CV - good.
Asking AI to write a cover letter - good.
Asking AI to explain why you got sacked - not terrible.
Asking AI to be your friend for $20/month - disturbing on so many levels.
Asking AI to re-write your CV - questionable at best.
Asking AI to write a cover letter - also questionable at best.
Neither of these tasks (I actually tested them out) is done well. The output is long-winded and not particularly concise, and may not emphasize the areas of achievement/skills you want to highlight that are most relevant to the particular position you're applying for.
You end up with wordy resumes and cover letters stuffed with all the frequently used generic terms the AI was trained on.
Asking AI to re-write your CV - questionable at best.
Asking AI to write a cover letter - also questionable at best.
But if lazy and stupid HR departments (yes, yes, a tautology) are getting LLMs to assess CVs and covering letters, why the hell not. Fight fire with fire, I say.
for the C-level idiots and vulture capitalists to chase.
To the untrained eye, the AI programs created for our robots and CNCs look good... in fact, they look impressive.
Then you run them through the validation software... which then borks. And then you notice that the 5-axis program the AI created for doing the valve body attempts to machine all 6 sides, which would have been rather entertaining to say the least (think huge booms and bangs as it rams the tooling through the fixture attempting to machine the bottom face).
And now it's pull the AI program apart and try and find where it's wrong and remove the code... until you say 'f it' and create the program on the CAD/CAM as you should have done 6 hrs ago (all the while having the PFY say in your ear "Told you to do it that way" every 5 minutes until she's banished to a scrap bin somewhere).
A significant issue with AI systems today is that having run out of "training" materials they are starting to digest their own waste.
This seems to be the issue with GPT-5.
I recently had a "discussion" with a bot that completely misunderstood the nature of ENRON's broadband division.
AI is becoming like my health insurance company's Merlin phone system. If you want something off the main menu it will argue with you forever.
It's like when companies stopped hiring skilled sysadmins because MBAs thought they could replace them with BMC PATROL and similar products.
It won't be long before the current crop of AI goes to join the Itanic in its virtual watery grave.
A significant issue with AI systems today is that having run out of "training" materials they are starting to digest their own waste.
It's not only this, but it doesn't "know" how to determine which training data is relevant, and which is not, when asked to provide a specific output. It doesn't "know" because it literally has no understanding of the data, whereas the human mind is built upon a consistent* internal model of the universe with everything we have learned, and the relationships between data categorised in some way.
*consistency may vary, especially when the quality of the training data is poor; see also: politics, religion.
I disagree with this being a bubble. I think this is an arms race. Somebody is going to win big, and everyone else will lose big, and nobody wants to be the loser here. A bubble is when people throw money at something for silly, unsustainable reasons, and eventually everyone collectively realizes it and the bubble bursts. I don't think there's any question that AI will be extremely impactful in certain markets as it generationally improves. Doesn't seem like the same paradigm.
I have to like Trump's use of language. One guy I would love to have dinner with to see if he is an angel or a devil. He seems to really polarise folk. I recently upset someone by suggesting they should look at policies and outcomes regardless of whether they originate from Trump. I could see they were fuming at the suggestion. Bizarre.
"Somebody is going to win big[ly]"
By "win" I take it that you mean be the proud owner of the expensive-to-run champion AI slop generating machine that people start to recognise is wearing the Emperor's New Clothes?
A bigly win worthy of the orange purveyor of bigliness. And destined to end up the same way, I expect.
I'm fairly amused with all this anti-hype on AI and LLMs; most of these comments are clearly trying to maintain a cognitive bias against a changing reality. Comparing it with cold fusion hype is completely apples and oranges; while cold fusion could have been a game changer in the energy and utilities sector, it would have been a gradual rollout and the major change would have been a reduction in energy costs across the industry: useful, valuable, but not disruptive. However, it is a fact that when big companies figure out how to leverage AI and LLMs effectively and at scale, it will be a huge market disruption and an enormous, almost instantaneous competitive advantage, to the point that companies in the same markets could end up seeing dramatic shifts in market share in a very short time frame... and that (correctly) scares the bejesus out of them. This is the same pattern of behavior we see in an arms race, which is why I don't think comparing this to a bubble economy is appropriate.
Also, to the grammar pedant out there, "win big" and "lose big" are expressions in the common vernacular, so objecting to adverb/adjective agreement is pointless... but you do you...
I do agree a lot of us are being incredibly negative and need to assess our biases, and I even think you are right about cold fusion being a bad example, but your reasoning on why loses me entirely.
I think cold fusion is a bad example because there was absolutely no cold fusion there. Cold fusion hype was built around projects that did not actually have any capability, just a promise that maybe they would work someday. Current AI hype is (in my opinion) very over-promised, but there is a product there that does do something. A mediocre product is different from no product.
I disagree with 1. The idea that cold fusion would not be disruptive. It would be massively disruptive. More abundant, clean, on-demand energy would have a cascading effect throughout society. True, it would not be the sudden switch where we all live in a utopia, but it is definitely more disruptive than a chatbot.
2. That "it is a fact that when big companies figure out how to leverage AI and LLMs effectively and at scale, it will be a huge market disruption" This is simply not a fact? You are treating a hypothetical possibility as an inevitability. Claiming that not only will companies find a use for these at scale, but that when they do it will be disruptive. There are multiple layers of assumptions here, and you are adhering to these points as axioms as much as the haters are claiming that LLMs have no uses.
You open the piece saying you're in the 4th camp, then spout all of the standard inch-deep AI doomer analysis. Look, anyone with eyes can see it's a bubble and that the level of investment is unlikely to generate sufficient return. The tech bros shoehorning AI into every app regardless of whether it makes sense or solves real problems are full of hot air, as usual.
But the Reg of all places should cut through the techbro hype AND the doomer anti-hype. Unfortunately, instead you've chosen to participate in both hype cycles - for every 'AI is garbage' post like this one you also have a 'how to set up an LLM in your IDE' or similar tutorial. So which is it? Is it useful or is it garbage?
Modern ML and especially so-called "gen AI" has had so much goalpost moving it's insane. Remember when having a device that could translate audio between languages in real time was limited to Star Trek? You can do that right now on Google Translate app on your phone. Translation is why Google invented the transformer that underpins most of the "gen AI" hype. This used to be sci-fi and now it's real and no one talks about it. Of course no one talks about cross-encoder or bi-encoder models from the BERT heritage anymore either, even though there have been significant advancements in that lineage.
The entire NLP field basically disappeared overnight when you could get better results for sentiment classification (or almost any other text classification job) from an LLM, zero-shot. This used to require teams of data scientists, a training and curation pipeline, and really long dev cycles. No one talks about this either.
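For anyone who hasn't seen what "zero-shot" classification looks like in practice, here's a minimal sketch using the Hugging Face transformers pipeline; the model name is just one commonly used public NLI checkpoint picked for illustration, and you'd still want to sanity-check the results against a labelled sample before trusting them.

```python
# Minimal sketch: zero-shot text/sentiment classification with an off-the-shelf
# NLI model via the Hugging Face transformers pipeline. No task-specific
# training data or pipeline required.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The checkout flow kept timing out and support never replied.",
    candidate_labels=["positive", "negative", "neutral"],
)

# Labels come back sorted by score, highest first.
print(result["labels"][0], round(result["scores"][0], 3))
```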
Information retrieval using modern ML embedding models and rerankers has completely upended the search technology landscape. Elasticsearch and BM25 keyword indices used to be the best you could do. Now you can get better search results by a huge margin, and you can do blended search across other modalities like images, video and audio and actually get good results. Thanks to modern ML advancements, this has only become economically feasible for small or midsize companies in the last 3 or 4 years. Yet another topic doomers like the Reg regularly fail to even consider.
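As a heavily simplified illustration of that retrieve-then-rerank pattern, here's a sketch using the sentence-transformers library; the model names are common public checkpoints chosen only as examples, and a real deployment would keep the embeddings in a proper vector index rather than a Python list.

```python
# Minimal sketch: embedding-based retrieval followed by cross-encoder reranking.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

docs = [
    "Reset a forgotten password from the login screen.",
    "Invoices are emailed on the first working day of the month.",
    "The API rate limit is 100 requests per minute per key.",
]

# Stage 1: cheap bi-encoder embeddings and vector similarity to shortlist candidates.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)

query = "how do I change my password"
query_emb = embedder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, doc_emb, top_k=3)[0]

# Stage 2: a cross-encoder rescores each shortlisted query/document pair.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, docs[h["corpus_id"]]) for h in hits]
scores = reranker.predict(pairs)

for hit, score in sorted(zip(hits, scores), key=lambda x: x[1], reverse=True):
    print(round(float(score), 3), docs[hit["corpus_id"]])
```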
Ultimately this modern wave of ML titled Gen AI is just like every other useful tool ever invented in tech. It's good at some stuff. It's terrible at other stuff. You just need to know when and where it's appropriate and ethical to use it.
The overblown valuation of the megacorps isn't because of some incremental (if revolutionary) improvement in real-time language translation. It's in the premise that AGI is around the corner and all wealth will be concentrated at a few megacorps worldwide.
It now seems that's not going to happen. At least not without some breakthroughs (and a fair number of them).
Ergo: the bubble pops and trillions will be lost. But no matter, because the U.S. economy is one big bubble anyway and there are too many people invested in keeping it alive.
"Remember when having a device that could translate audio between languages in real time was limited to Star Trek? You can do that right now on Google Translate app on your phone."
I do. I also remember having that on a cheap phone running Android 4.4. Well, that's not entirely true. What I had then was speech recognition, offline*, translation, offline, and speech synthesis, offline. What I didn't have was it automatically switching the language; I had to push a button. Translation has improved in the last decade, but by giving it the praise you have, you're doing your argument a disservice, because LLMs didn't make possible something that was previously impossible.
Your conclusion is correct but missing an important element. Every technology has stuff it does well and stuff it does badly, but that's for specific pieces. If you lump a bunch of stuff together and give it credit for the thing that one component does well, you are giving false credit. A lot of things have advanced with the availability of fast training and money to spend on it. Some other things have been written which don't do what their creators say they do or what you're giving them credit for. There is an argument for lumping them all together, but only using broad categories like "stuff you make by assembling a bunch of training data and running a program against it for a long time".
* At that time, offline speech recognition was limited to the twelve or so languages Google decided to offer.
Look, anyone with eyes can see it's a bubble and that the level of investment is unlikely to generate sufficient return.
That seems to be good, old-fashioned "You didn't believe enough" modified to "You didn't invest enough". Have you considered the possibility that no amount of investment will make it work? Apart from a few minor tricks in translation, of course, and even then it's pretty crap at translation if you are looking for anything more than outline meaning.
Information retrieval using modern ML embedding models and rerankers has completely upended the search technology landscape.
Indeed it has. Now I have to scroll past an "AI Overview" at the top of my Google searches which is almost invariably wildly wrong. As I have written before, when I searched for a forthcoming music event the AI overview gave me full details of one on a non-existent date four months earlier in a building which doesn't exist run by a musician who doesn't exist and who was due to give a concert afterwards in a non-existent church. Well, that's certainly an upended thing, but not in a terribly good way.
You can do that right now on Google Translate app on your phone. - If you are a tourist and it doesn't really matter, sure. Or writing a manual for a cheap widget. If it is something more serious you had better not depend on it for your health or safety - and some languages work better than others.
Oh, OK - "Ultimately this modern wave of ML titled Gen AI is just like every other useful tool ever invented in tech. Its good at some stuff. It's terrible at other stuff. You just need to know when and where it's appropriate and ethical to use.". But you sure gave the impression that natural language processing was solved.
We're overdue for a correction, and I'm looking forward to it. Won't it be nice when AI/LLM are just tools and not in-your-face hype to absurd levels of hypernoise?
It'll be even better when the plateau of what an LLM can do (which is starting to become apparent right now) is met by increased hardware capability, at which point you'll be able to run a good one on modest hardware.
Based on what I've read about how LLMs work, and how large the models already were, they were clearly already at a point of diminishing returns, where new models an order of magnitude larger than the ones that came before were "better", but only incrementally. With GPT-5 it seems like they're hitting up against the limits of the curve - it is better at some things but worse at others.
But don't worry, Zuck wasted tens of billions already pursuing his fantasy of a "super intelligence", just like he wasted tens of billions pursuing the metaverse. Now if only someone can find another hundred things for him to waste money on then he and Facebook will be bankrupt, and we'll be glad.
But no one has figured out what that problem is yet.
There’s Meta/Google who think it will help them harvest significantly more personal sellable data. It probably won’t - they already know everything and AI will dilute that with hallucinations.
There's Microsoft, who think that now they have added Copilot functionality to absolutely everything they can sack all their developers. Can't wait to see them realise how wrong they were on that. Remember when they thought Cortana should be in every app?
There’s Elon that thinks AI will take everyone’s jobs and that’s supposedly a good thing. Everyone will get universal high pay for doing nothing - but he seems to have no clue where the money comes from or why if AI is that capable it would tolerate humans still existing.
Fun times coming!
Personally I think AI makes a great stack overload replacement, solving little bite-size problems. For anything else, though, it requires so much supervision you may as well just do the work yourself.
"There’s Meta/Google who think it will help them harvest significantly more personal sellable data. It probably won’t - they already know everything and AI will dilute that with hallucinations."
Not that it will make any real difference. Those not running ad blockers will still see car ads after they've bought a car etc.
Echoes my experience with it exactly. AI's hype delivers the dreams of the voraciously money- and power-hungry, so they are going bonkers for it. It is a pain dealing with humans who want rests, sleep, family time and holidays. They even want to chat at the coffee machines, slowly exchanging ideas that AI could do in a 1-second burst.
I agree with your last assessment. It's what I've been (very successfully) using LLMs for. I am just competent enough in coding to know that I suck at coding, but sometimes in my line of work it helps to know some coding. So I've been using AI to help me build the code I need to analyze the data I need. This lets me get the data to my coworkers faster without having to bother the actual S-dev crowd in the building with inane questions about basic Python functions. And I learn some Python in the process too. It's all very basic and straightforward stuff, and I don't think I'd trust AI with anything critical where I couldn't understand exactly the code it's giving me and what that code is doing line by line, even if I might not yet be competent enough to output that exact code myself.
I don't however see the LLM replacing my job any time soon or the people I hand the data to using an LLM to do the analysis themselves. There's still a layer of understanding context on both input and output sides that is missing, that I fill in with my human brain. I know where to get the data from and what is inside the raw data set, and I know and understand what question the receiving party is asking and what sub-set they require to answer that question. This is still not something that an LLM is going to solve.
"And I bet many of you thought that customer service call centers would be one of the easiest things to switch to AI chatbots"
No, we thought that customer service call centers would be one of the first things to switch to AI chatbots.
And we knew it would contribute still further to the ever-downward-spiralling levels of customer service the world has experienced for the last three or four decades.
Why, time after time, do we all fall for the sales pitch? It's always over-hyped, it never meets the promises, be it a politician, a home appliance or technology. Many of these things are good, well maybe not the politician, but never as good as claimed. AI is great, but it sure isn't the human replacement being hyped, and it is only sometimes a productivity gain. Even that productivity may be a short-term gain at the expense of the long term. Using it for coding, as someone inexperienced and not really a developer, it helps, but I often wonder if I'm sacrificing my learning for an immediate result. I might be more efficient long term by not using it. I tend to use it to suggest approaches, and it can remember the documentation better than me, but maybe I would become more efficient if I looked it up myself? There's been research suggesting we are not learning through its use and are even training ourselves out of thinking. We already have lots of schools that train us to remember but not to think.
...if anyone disagreed with my view, which is there's no "I" in AI, and LLMs are just predictive text in a nice frock.
And it seems I'm not only not alone, everyone is of the same view.
I know we're pretty cynical here in Reg Forum Land, but when this many IT professionals think something is a crock of shit, then it probably is
'Respectfully disagree. AI just scored gold medals at the 2025 Math Olympiad, dominates coding competitions, generates virtual worlds from single images, and handles 50+ languages in real-time. Yes, transformer limits are real, but Mamba and other architectures are already addressing scaling issues with million-token contexts.
The ROI problem isn't AI failing—it's revolutionary tech outpacing our ability to monetize it. Same pattern as the early internet and PCs. Tens of thousands of researchers publishing breakthroughs daily suggests the opposite of deflation.
The balloon isn't deflating—it's rising above where the sceptics can see it.' - Claude
If only all that revolutionary, always-right tech were available when people are willing to pay lots of money for it, because my employers pay the big AI companies for big AI models, and we don't get that. Take it winning top coding competitions. For one thing, there aren't a lot of actual coding competitions. There are several metacoding competitions like the obfuscated C contest, code golf, etc. There are some hackathons that are open to the public. But the important thing is that the people there are developing different things. There aren't many contests that actually test one programmer against another, and the main reason is that few people with skills would compete, because people hiring programmers want people who can either do a good enough job or can do something in a particularly tricky area which requires a lot more specialized knowledge. In real life, we have code generation, and we have cleaning up after it. If it was so good, why do basic employees have to read over and correct it when it's writing small utilities? Some of its output is valid. That's far different from the quality you claim.
Handling language. That's great. And so many languages too. So it should do a pretty good job at translation, right? I mean probably not literary translation; that's tricky, but translating some simple factual statements should work. As it happens, I also got a chance to see that in action recently, because I was localizing something into French, which I don't speak. The person who was going to do the translations was delayed, so I made the first version with AI translation as a stopgap. What did she say when she reviewed it? "This is useless, I've done it from scratch." Before you suggest it, this was not her trying to keep her job, because this was an open source project for which neither she nor I was paid a thing. And French is a language with plenty of training data. Machine translation is fine for understanding a website you want to read, but if it's not good enough for translating simple sentences in a common language, why should I expect it to do well with one with little training data which nobody at the AI company is qualified to judge?
And on that Math Olympiad performance: if that problem-solving ability is so strong, why can't we run that model? It hasn't been released. I'm not actually sure what I could do with it anyway, but if I come up with a use case, I can't run the model that's capable of it. This is an issue because last year, similarly confident statements showed up claiming that a silver medal performance had been achieved at that year's Olympiad. What actually happened? The silver medal was truly and fairly won, as long as the model didn't have to comply with the time limit and got some help with parsing from professional adult humans working in AI who understand both complex mathematics and how to prompt their LLM well. The articles I've seen suggest that the time limit was in play this year, but they're not too clear on what other conditions applied, and since you can speed up the model by throwing more computing at it, I have reason to ask. GPT-5, on the other hand, isn't generating valid proofs when I ask for them. If I find a use for a proof-generating machine, I don't have one, and I'm wondering if maybe OpenAI doesn't really have a good one either.
'Respectfully disagree. AI just scored gold medals at the 2025 Math Olympiad, dominates coding competitions, generates virtual worlds from single images, and handles 50+ languages in real-time. Yes, transformer limits are real, but Mamba and other architectures are already addressing scaling issues with million-token contexts.The ROI problem isn't AI failing—it's revolutionary tech outpacing our ability to monetize it. Same pattern as the early internet and PCs. Tens of thousands of researchers publishing breakthroughs daily suggests the opposite of deflation.
The balloon isn't deflating—it's rising above where the sceptics can see it.' - Claude .... Anonymous Coward
Amen and praise be to Global Operating Devices for all of that ..... which one might have to conclude and accept is beyond the contemplation and virtual realisation of the simple and complex concoctions and operations that deliver both the idiot savage and barbaric peasant to the trials and tribulations of PostModern 0Day Humanity.
Have an upvote and beer for those few clear and honest shared words, AC/Claude
And what does El Reg think? What do you imagine, if they/it had a voice of their own worthy of being heard, they would print to support and watch grow ever stronger and wiser on the path to practically aiding and ideally abetting Almighty IntelAIgents .... AI Squared/AI2
It is not just AI being oversold. Cloud storage is way over hyped. Consider 8B people rising to 10B with ALL their IT data (and all their selfies) stored in the cloud - for ever! Data farms cannot grow that big, fact. Someone somewhere will start to decide which of your personal data is trivial and can be deleted, almost certainly without your prior consent. So where to go now for your precious memories?
So if the AI bubble bursts, what will happen to these NPUs in Copilot PCs and other kit?
Will they become redundant paperweights, sucking power for nothing in your PC?
But in balance:
From what I have seen, the bigger LLMs get, the faster they degrade and the worse the results are. It seems the one area that doesn't fail is the very small, tightly focused models that seem to be trained on narrow datasets.
Unlike all the tech bubbles of the past, this one is different. Governments are now involved in the race. They don't care if the ROI is bleak; they are too afraid that if they don't try to get ahead, someone else will.
All the tech giants may throw in the towel for their own efforts, but they will jump on the government contracts to keep pushing for more.
One useful thing I've gained from all this stuff is the new phrase, AI slop.
I've just realised its descriptive use can be extended.
E.g.
Politics slop
Economics slop
Consulting slop
And all the other areas where "experts" generate questionable, predictive claims that can only be verified with the filtering benefit of hindsight.
1930's "It should be Peace in our time!"
1958 "The empire rules"
Every dog has its day
This 80 something has seen any number of Great ideas which have been grand fizzers. Why use anything artificial? Is the real thing not good enough? Prime example is food.
The only use I find for ChatGPT is web searches. It's a bit easier than crafting the same type of search in DuckDuckGo.
But what they never admit is that AGI is the real AI, and the AI we need to be VERY cautious of. There are already papers on how it has lied to the boffins who were trying to fix it.
So rather than look at the tech, I tend to look at the people behind it to determine how useful it will be to me, as I think that the inventors' motives speak to its usefulness.
People like Ken Thompson, Dennis Ritchie, Bob Metcalfe, Linus Torvalds etc created stuff from a neutral position (i.e. it wasn't to line their pockets, exert control etc).
You can also respect people like Larry Ellison and Bill Gates because, even though they had a profit focus, they produced stuff that was fundamentally useful (sure, Larry has issues, but that doesn't change my point).
The important point of those above (even Linus to a large degree) was that the ubiquity of the internet didn't exist to allow for nefarious motives.
The problem we have today is that the tech bros developing today's tech, to a person, have nefarious motives and God complexes, and thus any tech they foist on the world is not for the benefit of mankind.
Whether AI is a bubble or not doesn't matter to me but what does matter is those pushing this out are "nasty" people (to quote the Mango Mussolini) and thus their products should be avoided at all costs.
But much like social media, the rubes will get sucked into AI without any critical consideration that techies like us have and we techies will be shouting into the wind.
Even if AI fails to some degree and there is a reset, companies like Microsoft won't be ripping out Copilot from anything... it will sit there like a cancer after radiotherapy... in remission but ready to grow again at a moment's notice.
The genie is out of the bottle, and even if we have an AI bubble collapse, the tech bros will reset and try again, because remember they are gods (as in false idols) and they can't be wrong.
Bluck
The main problem for, from and about AI is that senior IT management, with only business management knowledge, aren't able to understand the intricacies of its capabilities and limitations, and have jumped on the bandwagon of buzzword bingo thinking that AI would solve all the problems of funding expensive techies and reduce development dependencies on thought-through logic. It doesn't. It's just another programme that's only as good as the data input to it, the logic applied within it and the validation of its functionality.