back to article The air is hissing out of the overinflated AI balloon

There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) "Write an A-Level paper on the themes in Shakespeare's Romeo and Juliet." I propose a fourth: AI is now as good as it's going to get, …

  1. original_rwg
    Joke

    Looks like you're betting the farm on A.I.

    Would you like me to run you a bath?

    1. b0llchit Silver badge
      Coat

      It looks like you filled the swimming pool and put in big bubbles.

      Would you like me to pop some bubbles?

    2. Fruit and Nutcase Silver badge
      Coat

      Looks like you're betting the farm on A.I.

      ChatGPT-5 was like having "access to a PhD-level expert in your pocket."

      Is that ChatGPT-5 in your pocket, or are you just happy to see me

      As the CEO asked the CTO

      1. Anonymous Coward
        Anonymous Coward

        Alas the PhD is in Gaslighting For Dummies.

        1. Timop

          Just check retractionwatch and spot people who were labelled as exceptional talents and got a lot of funding and nice careers.

          And then remove the possibility of retractions and the whole peer review process and let a chatbot calculate probabilities from undeclared (wonder why..) datasets and just step up the coercion until people just consider everything that is being calculated valid. What could go wrong?

      2. Sp1z

        Or as the HR exec asked the CEO, if recent events are anything to go by

  2. Tron Silver badge

    AI was 3% better than the Metaverse and NFTs.

    AI has limited value as a user interface for search queries, for those who think that boolean is a South Pacific language.

    It may have some use in niche circumstances. This will not be especially lucrative.

    It is not reliable enough to be built into operating systems or software and should only be an option. That is, there should always be an 'off' button, or better yet, an 'on' button, so it is off by default.

    AI has no investment value, as the ROI is negligible and the data centre costs are vast. Essentially, it is a luxury gimmick.

    It is also a major security issue and should not be used on any secure, enterprise, government or military system. It is inherently unreliable, it may phone home and requests to it can be mined.

    No sane person would invest serious money in AI.

    If you do want to invest money, invest it in your security - no public internet access to your intranet, two systems per desk, lightweight/ephemeral data only on any internet linked system, reduce the amount of data you hold, store it on paper if you can. Use simpler software, which will be more reliable.

    As for the AI rollercoaster, it was amusing. Time to move on to the next big thing, hopefully distributed systems, including distributed social media.

    1. xyz Silver badge

      Re: AI was 3% better than the Metaverse and NFTs.

      best not mention the above to Microsoft.

      1. Boris Dyne

        Re: AI was 3% better than the Metaverse and NFTs.

        The company that bought Nokia, installed its crap software on the phones, and reduced what was then the largest mobile phone company on the planet to a heap of e-waste. Even if someone has told them, it won't make any difference.

        1. imanidiot Silver badge

          Re: AI was 3% better than the Metaverse and NFTs.

          Nokia was doomed anyway. They missed the boat on moving to smartphones and their pivot to using MS Windows for Phones was a last ditch effort to remain relevant. As far as phones went at the time they weren't actually all that terrible, but Microsoft repeatedly and chronically kept fumbling the ball on their phone OS which doomed Nokia.

    2. Anonymous Coward
      Anonymous Coward

      Re: AI was 3% better than the Metaverse and NFTs.

      The sane people know how to ride the bubble, and get out well before it collapses.

    3. Anonymous Coward
      Anonymous Coward

      Re: AI was 3% better than the Metaverse and NFTs.

      "No sane person would invest serious money in AI."

      Not sure about that. Maybe not so much money, and I think of it differently. But you are looking at it, in a user context, as someone with a high IQ. Most of the population, for a number of disgusting reasons not of their making, just want to be told what to do. AI, or as I prefer to call them, LLMs, are a great development, but they are not a panacea. Also, the crazy investment rush is a big gamble by the oligarchs, because if someone really pulled it off they would rule the world. Well, until their government sent people with guns to take it over so they could rule the world.

    4. Dr Dan Holdsworth
      Boffin

      Re: AI was 3% better than the Metaverse and NFTs.

      General AI as being built now seems just to be a pattern-spotting system optimised towards human languages. To my mind, optimising towards languages is a very silly way to go other than for demonstration and proof of concept purposes, because we already have very good human language systems out there now. These are known as human beings.

      Pattern-spotting on other things by contrast is quite a good idea, but once again this is a specialised tool of limited real-world use.

      1. Strahd Ivarius Silver badge
        Devil

        Re: AI was 3% better than the Metaverse and NFTs.

        If it is indeed optimized towards human languages, that would explain the abysmal quality of the 30% AI generated code at Microsoft...

    5. MrAptronym

      Re: AI was 3% better than the Metaverse and NFTs.

      I think a caveat here is that similar to the way the 'metaverse' tried unsuccessfully to claim games such as Roblox and Fortnite under its banner, the term 'AI' successfully encompassed all of the various machine learning tools that we have had for a long time now and are very useful. When these specialist machine learning systems, image recognition, large language models and reverse diffusion image generators are all being marketed as one single thing, they have been able to ride on the success of some of these systems to pay the capex on others.

      I also think LLMs will have a bit more use than you suggest: I suspect a few legitimate uses exist somewhere, but there are also plenty of places where quality doesn't matter. For instance, they are great at quickly generating tons of low-quality but structurally solid text. There is certainly a market there. I would argue it isn't making anything better for humanity, but there are those willing to pay small amounts to do it, and you probably don't need to train the most effective models to support spam. Image and video generation also only needs to be good enough to flood social media. Maybe that has long-term negative effects on those networks (financial ones; other negative effects are a given), but I think those markets will persist to some extent, at least for now.

      I know a depressing number of people who use and trust AI when they shouldn't, and they have jobs and lives where they are sufficiently padded from the consequences of their mistakes and misunderstandings that they will simply continue to do so. I am sure that remains a market. Some portion of people will sacrifice almost anything for convenience. Hopefully that market doesn't grow until the people and systems they unknowingly rely on collapse.

      Of course the insane valuations and spending we have seen have been based on the idea that 'AI' was going to replace people. That companies would be able to lay off a major percentage of their workforce, and finally downgrade others in white collar sectors to 'unskilled' labor. That is the only justification for the spending we have seen, and I think people are increasingly waking up to the fact that these technologies we have today simply do not do this. So the bubble will burst and the dust will settle and we will see what remains.

      1. Alan Brown Silver badge

        Re: AI was 3% better than the Metaverse and NFTs.

        "and finally downgrade others in white collar sectors to 'unskilled' labor"

        When was the last time you saw a room full of ledger clerks balancing the books?

        That downgrading and replacing has been happening for a while. AI is just the latest step along the way.

        The irony is that for the most part it's going to replace middle management, not the coalface

    6. Alan Brown Silver badge

      Re: AI was 3% better than the Metaverse and NFTs.

      "including distributed social media"

      Only if we can have an option to forcefeed hemlock to marketers

      recalling the bad old days of Sanford Wallace hijacking any and all mailservers, or Canker and Seagull's green card tsunamis

  3. NewModelArmy Silver badge

    Does This Mean...

    That the cost of graphics cards will come down ?

    (or any other AI laden poop).

    1. Snowy Silver badge

      Re: Does This Mean...

      No, nothing will bring down the price. If the need for AI cards goes down, they will just look to make the same money selling graphics cards.

      1. Like a badger Silver badge

        Re: Does This Mean...

        Except gamers mostly can't pay the top dollar that GPU makers would like. When the AI bubble bursts, those GPU makers will be struggling to sell anything other than gaming cards, so they'll have to forgo their 60% margins and sell at rates that the gaming market will accept.

        As the fizz went out of crypto-mining, along came AI, and that was just gravy for the makers; they thought they'd died and woken up in heaven. But the storm clouds are gathering, and where is ANY high-volume business use case for new GPUs beyond the current AI bubble? Even if there is one, it can almost certainly use the billions of high-end processors already bought, paid for and installed, so there's a big question as to what the value of the market for new GPUs will be in, say, 2028.

        Wouldn't want my pension invested in big tech in general, and GPU suppliers in particular.

        1. Anonymous Coward
          Anonymous Coward

          Re: Does This Mean...

          "Wouldn't want my pension invested in big tech in general, and GPU suppliers in particular."

          I'll be changing one of my funds this week. It has done well, but I think things are going to pop before the new year. Of course the bastard banks may get it all anyway. Apparently, they are the first creditor to most finance businesses and get first call on money in a collapse. We do not own the shares, we do not own our pensions, we are simply beneficiaries. Scary when you think about it. So in a collapse the big banks sweep up the assets. No doubt that would include our homes. Just wondering, if there is a collapse, what happens to the big guys: BlackRock, Vanguard, State Street etc.? Do they go to the banks? Although I suspect the banks may control them via obfuscated routes anyway. There's some weird ownership thing going on among those guys, and I'm convinced their priority is ownership and control of markets over and above fund beneficiary interest.

          So when a financial disaster happens, the big banks absorb smaller ones, take assets and emerge controlling more of the world. At least that was how the last big depression went. We need to find a way to stop this; it's not in our interest.

        2. cookiecutter Silver badge

          Re: Does This Mean...

          I'm hoping gamers remember how badly they've been fucked over by NVIDIA, firstly with bitcoin, then AI. Even rejigging quality levels of cards to scab more money. Myself, I'm going to stick to anyone but NVIDIA on my next rig.

          1. GNU Enjoyer
            Stop

            Re: Does This Mean...

            >firstly with bitcoin then AI

            It hasn't been possible to mine bitcoin with GPUs for many, many years now (ASIC hash rates are exponentially higher) - the previous big demand for GPUs was for mining other cryptocurrencies.

            GPU prices were indeed originally much lower, even for the highest end models - but more than a decade ago, nvidia learned that the suckers still pay, price increase after price increase.

            You would hope that AMD or Intel would compete with nvidia by offering GPUs that work with free software (or at least aren't digitally handcuffed, allowing for a free driver to be written) - but no, those run more proprietary software and are digitally handcuffed and the price isn't much lower either.

        3. Dr Dan Holdsworth
          Boffin

          Re: Does This Mean...

          The money will be in MESH networking and swarm robotics, with limited and specialised AI that is useful for military purposes. Ukraine has pioneered a path others will follow, and sooner or later we're going to see someone building a production line for small, general-purpose attack drones designed to clobber armour or devastate groups of people, depending on how the explosive system is triggered.

          The only real question then is which minor state gets the overrun treatment and can the actors involved hang onto it long enough to earn a profit back out of the venture?

    2. Anonymous Coward
      Anonymous Coward

      Re: Does This Mean...

      Yep. Bit like war. Disaster for (almost) everyone at the time but some good developments emerge.

      1. Anonymous Coward
        Anonymous Coward

        Re: Does This Mean...

        "good developments emerge"

        But only for a few....

    3. LybsterRoy Silver badge

      Re: Does This Mean...

      More importantly does it mean we will be able to buy a PC without copilot?

      1. Jonathan Richards 1 Silver badge

        Re: Does This Mean...

        >buy a PC without copilot

        Or even a mouse! I have a Logitech item that I like a lot, but it has developed an irritating habit of double-clicking when given a single button press, so I went looking for a replacement. At least two of the devices that I identified as candidates have "AI features", as in e.g. users can assign AI shortcuts to the mouse, such as launching Copilot or ChatGPT, summarizing selected text, generating code snippets, or autofilling templated emails. Dammit, I just want something that single-clicks on a single click!11!

        1. Strahd Ivarius Silver badge
          Trollface

          Re: Does This Mean...

          Use AI-powered voice control!

          Mouses are so XXth century...

          1. CountCadaver Silver badge

            Re: Does This Mean...

            Hello computer hello

            "Why don't you just use the keyboard and mouse?"

            "Oh how quaint" *cracking knuckles*

            </Star trek 4>

  4. zimzam Silver badge

    Dot Dumb

    I'm not sure I see such direct comparisons to the dot com bubble. Sure the big players are massively overvalued, but hardly anyone else has poured serious money into it. Most have just rebadged old tech with an AI label. So the bubble will pop, but it'll likely be limited to those few companies.

    Even then, the ones who have invested in infrastructure like Google and Meta will probably take a bath, but the likes of Nvidia who are cash rich and all their other market segments are still hugely profitable, they'll probably be just fine. Not close to the most valuable company anymore, but otherwise just chug along like they were before.

    So I don't think there's much reason to expect a recession from this.

    1. Doctor Syntax Silver badge

      Re: Dot Dumb

      They say that those who made money in a gold rush were those who sold shovels. NVIDIA is in the shovel business.

      1. Ace2 Silver badge

        Re: Dot Dumb

        Of course Nvidia will still be rich. They’ve made buckets of cash, and they aren’t stupid.

        What will change is that their *future* profitability will drop off a cliff. No way they continue making $10B/qtr selling video cards and Mellanox NICs.

        It will still be a business - but a much smaller one.

        1. Doctor Syntax Silver badge

          Re: Dot Dumb

          In a sensible world, NVIDIA would distribute that cash to the investors, because they're not going to make sensible use of it, and go back to being that smaller business; the investors would be pleased with their windfall and realise it was a one-off. In the real world, they'll scream and shout at the management.

    2. vtcodger Silver badge

      Re: Dot Dumb

      So you think the damage will be contained to AI bubble stocks? Perhaps, but that's famously what the Federal Reserve chairman thought with regard to subprime mortgages in 2007. Some of us out here are skeptical that big-time market overpricing is confined to a few AI bubble stocks. Note that the S&P 500 price/earnings ratio is hovering at about twice its long-term average of 15.

      1. zimzam Silver badge

        Re: Dot Dumb

        They're not really comparable. Everyone was investing heavily in CDOs and SIVs back then, even pension plans. And the P/E is high because of those few companies. The rest of the companies in the index are at fairly normal levels.

        1. zappahey

          Re: Dot Dumb

          CDO and SIV? I think you have your stock market crashes confused.

          1. zimzam Silver badge

            Re: Dot Dumb

            No I don't. CDOs and SIVs were the instruments investment banks used to sell off subprime mortgages. By getting the ratings agencies to treat them like they were as secure as treasury bonds they conned most of the market into buying them, including pension funds. Then those were bet against by the same investment banks through credit default swaps.

            1. zappahey

              Re: Dot Dumb

              That was 2008, not the Dot Com bubble, which crashed in 2000

              1. doublelayer Silver badge

                Re: Dot Dumb

                And if you reread the thread, you'll see that's what they were responding to: "So you think the damage will be contained to AI bubble stocks? Perhaps, but that's famously what Federal Reserve chairman thought with regard to subprime mortgages in 2007."

              2. zimzam Silver badge
                Meh

                Re: Dot Dumb

                Read the whole thread before replying, please.

    3. StrangerHereMyself Silver badge

      Re: Dot Dumb

      This is because investors believe that mega-corporations cannot fail anymore and will essentially become eternal. The trick is to simply hold on to your megacorp stock and sell at the right moment when the bubble pops.

      Some stupid noob investor will be left holding the bag.

      1. katrinab Silver badge
        Alert

        Re: Dot Dumb

        You are never going to be able to time this right, other than through sheer luck.

        1. StrangerHereMyself Silver badge

          Re: Dot Dumb

          A megacorp will always have a baseline profitability because it's a MONOPOLY. When some hype occurs, its valuation goes up several fold. The bubble deflates slowly enough for you to cash out and still remain invested in the megacorp.

          1. katrinab Silver badge
            Alert

            Re: Dot Dumb

            No it does not. By the time you know the crash is happening, it is too late.

            1. Falmari
              Happy

              Re: Dot Dumb

              @katrinab "You are never going to be able to time this right, other than through sheer luck."

              Of course you can, just ask your AI. ChatGPT-5 will be able to tell you the optimal time to sell. Trust me with ChatGPT-5 in your pocket you will make out like bandits.

              1. Anonymous Coward
                Anonymous Coward

                Re: Dot Dumb

                Guffaw

            2. StrangerHereMyself Silver badge

              Re: Dot Dumb

              There's not going to be a crash. You know you need to sell your megacorp shares now, do you not? Or are you one of those noob investors left holding the bag?

              1. David Hicklin Silver badge

                Re: Dot Dumb

                There is, however, going to be a lot of spare data centre capacity, not to mention all the hardware that has been installed in anticipation of the Big Payday.

                Less demand for DCs and the servers that go in them will hurt the hardware manufacturers. The biggest fallout will be confidence, and that could cause a stock market crash. These two together will certainly cause a slowdown, if not a recession, in the IT world.

                Google, Microsoft, Nvidia etc are probably big and cash rich enough to ride out the storm. Venture funds will have to take a haircut (they are used to having to do that). Metaverse/Farcebook? They could be in trouble.

                What will be left? Some will survive, mostly those that were put together for a specific application and trained on specific, sanitised data. Oh, and the cockroaches.

                Personally, it can't come quick enough for me; the sooner it happens, the less pain there will be for all of us.

                There, that is my prediction, do I get a prize if I am right??

              2. sedregj
                Gimp

                Re: Dot Dumb

                ... mmm green crayons

            3. Helcat Silver badge

              Re: Dot Dumb

              Or... you sell just before you >think< the bubble will burst, which then triggers a panic and people selling like crazy... and that causes the bubble to burst?

              And then, if you're feeling confident, you buy up all the (now) super cheap shares, and hope they recover so you can make another killing by doing the exact same thing... and if this cycle keeps repeating, people will think you're predicting the market and follow your trends without understanding YOU are the reason for the trend.

              AKA a self-fulfilling prophecy.

        2. This post has been deleted by its author

        3. Michael Hoffmann Silver badge

          Re: Dot Dumb

          *You* - and I - are indeed never going to time it right. But those with the inside information will. Leaving you - and I, I mean me - holding the bag, if we were stupid enough to jump on the bandwagon.

          1. katrinab Silver badge
            Megaphone

            Re: Dot Dumb

            No. Even with inside information, you are not going to time it right.

            Because, as soon as the insiders "know", it is too late.

        4. Anonymous Coward
          Anonymous Coward

          Re: Dot Dumb

          Correct. I pulled out expecting the crash, came back in, and am now pulling out again! If it's your pension, most companies won't let you change instantly, so you will always lag the market by a few days. So, let's say there's a big shock: you aren't going to get out even halfway along the down slope. In fact, they will probably have halted trading before your sell order is due to be executed. You couldn't even time it exactly on shares, because there is a time lag. This is why the pro trading firms buy datacentres near the exchanges, so their algorithms get a couple of milliseconds' advantage.

      2. Anonymous Coward
        Anonymous Coward

        Re: Dot Dumb

        If you want to retire or have to retire it's a big problem. If you're young you can hold and hope or correct your course.

        1. Anonymous Coward
          Anonymous Coward

          Re: Dot Dumb

          .... or lose even more?

          It's dangerous to assume that stock value will always go up, given enough time.

      3. Alan Brown Silver badge

        Re: Dot Dumb

        History tends to show that megacorps (especially financial/investment ones) can literally evaporate overnight

        If your product is an illusion (money) then your fortunes are too. There are a lot more pyramids out there than the ones at Giza

    4. katrinab Silver badge

      Re: Dot Dumb

      Cisco was fine as a company after the dot com crash, in that it was profitable and continued to be profitable. However, it peaked at $82 on 27th March 2000, and still hasn't recovered to that level even today. $82 was pricing in growth that was never going to happen.

      I think Nvidia will end up looking like that.

    5. Timop

      Re: Dot Dumb

      Imagine venture capital firms pouring billions to AI related stuff and reality hits.

      They will definitely find ways to get their money back. Preferably from taxpayers like the financial institutions usually do. Unless it is funded mostly by regular people. Then they'll just need to deal with the pain.

    6. Anonymous Coward
      Anonymous Coward

      Re: Dot Dumb

      "So the bubble will pop, but it'll likely be limited to those few companies."

      Those companies are half the Nasdaq. The ripples will be more of a tsunami, and we are overloaded with debt. The tech market is probably highly leveraged, and the margin calls will spread like a shock wave from a nuke. Probably! Has anyone analysed the impact of the Nasdaq falling instantly by 50%?

    7. Elongated Muskrat Silver badge

      Re: Dot Dumb

      There are several players who have obviously thrown a lot of money at it, through building data centres specced entirely to serve up "AI" mush. It's obvious because they try to shove the "AI" nonsense in your face at every opportunity, and it's also already obvious that people don't actually want it, or they wouldn't have to try to sell it so hard. The ones that spring to mind most are Meta, Microsoft, and Google, and to be fair, those are all companies that could do with having their wings clipped. At the very least, it might mean that Meta AI, Gemini and Copilot aren't turned on by default and in a way which is hard to disable or otherwise get rid of.

    8. cookiecutter Silver badge

      Re: Dot Dumb

      The S&P is 19% FAANG, and of them NVIDIA is some stupid amount of that 19%.

      Since Wall Street is full of idiots and morons, if NVIDIA sells ONE less GPU than the year before, let alone 1% less, then it'll be the end of the world and everyone must be fired!!! The growth-rot economy, as Ed Zitron calls it... must grow must grow must grow!

      When NVIDIA inevitably goes, everything else will follow. Don't forget that the stock market has no relation to reality. Somehow OpenAI is "worth" $300 billion, and remember Amazon's amazing shops that turned out to be 1,000 Indians watching cameras?

      1. MrAptronym

        Re: Dot Dumb

        Exactly, markets are not connected to the fundamental realities people think they are. Tesla is the largest auto-maker by market cap, four times the size of Toyota, even though Toyota has four times the earnings, more revenue, more employees, and people don't associate it with a famous creep. Investors care about nebulous future growth potential, and the sentiment of other investors more than anything. Tesla is worth that much because its CEO has convinced them it will solve self-driving cars and make humanoid robots ubiquitous.

        Even if genAI has more usefulness than I give it credit for and could continue to grow for years and become profitable, the current explosive spending cannot continue, and the second it slows, people will pull out. It isn't enough to be sustainable and profitable; if you aren't growing then you are a failure. All this capex in the AI space is flowing to Nvidia, and it has to keep doing so in ever-increasing amounts or Nvidia will look bad. The industry simply cannot justify spending more on GPUs every year forever.

        While this is even more speculative: if nvidia starts to sink in price, I fully suspect that it will drag many associated AI companies down too, just because of the vibes. Even if these companies need to start operating without the insane capex spending to ever generate a return. I would not be at all surprised to see a 'correction' on all them the second they stop buying GPUs. It isn't like the current 'fundamentals' look good for many of the AI operations. Maybe Meta's advertising business is making gains with GenAi? Cursor could be making money as long as the owners of the foundation models they rely on don't squeeze them too much?

  5. CapeCarl

    Apartments of the Future? (Recycled data centers)

    "Yeah my apartment building is a bit boxy, few windows. But hey we never lose power during a storm and I have a fiber connection in every room!"

    (During the .Com bubble I lived in state CT while simultaneously working for two .Com's, one in MD and one in MA (human hyper-threading))

    1. m4r35n357 Silver badge

      Re: Apartments of the Future? (Recycled data centers)

      and a communal nuke ;)

    2. NickHolland

      Re: Apartments of the Future? (Recycled data centers)

      A friend of mine worked for a company that bought out a not-very-old telecom facility (I don't recall if it was a victim of the dot-com bust or of telecom's migration to newer tech). It was filled with rows of two-post racks.

      They left a lot of the racks in place, and attached drywall to them. Ta-da, instant office spaces! (or, since they didn't go floor-to-ceiling, perhaps better viewed as "medium privacy cubes"?)

  6. This post has been deleted by its author

  7. m4r35n357 Silver badge

    I am in the fifth camp

    It is a total bag of wank, and you AI suckers have been conned big time.

    1. Ian Johnston Silver badge

      Re: I am in the fifth camp

      Harsh but fair.

  8. OllieJones

    AI like an intern?

    Yeah, AI is like an intern. Like a high-school student intern who smokes weed in the parking lot at lunch every day.

    It's like the New Riders of the Purple Sage lyric ...

    "Smoking dope, snorting coke / trrin' t' write a song / forgetting everything I know / 'til the next line comes along. "

    Rock on, Sam.

  9. Anonymous Coward
    Anonymous Coward

    The horror ...

    "Altman, who should really get an AI cheerleader costume"

    The picture of Altman all pompoms and the shortest of skirts attempting a high kick is far too gruesome to inflict on humanity.

    1. Anonymous Coward
      Anonymous Coward

      Re: The horror ...

      Do you mean like this :

      https://www.thepoke.com/2025/08/20/hercules-actor-kevin-sorbo-keeps-moaning-about-male-nfl-cheerleaders-and-twitter-ratioed-him-right-back-to-ancient-greece/

      1. Elongated Muskrat Silver badge

        Re: The horror ...

        Ah yes, the same Kevin Sorbet Sorbo who has apparently decided to refuse to work in California, which has nothing to do whatsoever with everyone in Hollywood telling him in unison to piss off.

        1. Elongated Muskrat Silver badge

          Re: The horror ...

          Nice to see I'm getting the homophobic downvotes there (the real reason why Sorbo can't find work)

      2. Anonymous Coward
        Anonymous Coward

        Re: The horror ...

        The assumption in these comments that a male cheerleader must be gay seems a bit ... stereotyping. Straight people can be tacky and camp too.

    2. GeneralDisaster

      Re: The horror ...

      but you did it anyway, thanks.

    3. daflibble

      Re: The horror ...

      But it's one thing all this AI has gotten really good at.

    4. David 132 Silver badge
      Coffee/keyboard

      Re: The horror ...

      Ah, the Toni Basil / Sam Altman crossover we never knew we wanted.

    5. spacecadet66

      Re: The horror ...

      Come on man, some of us just had lunch.

  10. Pascal Monett Silver badge
    Mushroom

    The AI bubble

    So the air is coming out. I'm waiting for someone to slash the tires.

    What will this mean for the dozens of bitbarns that are planned? I've got the feeling that the electric grid has a chance of surviving the next decade just fine.

    Death to AI, and end of career to all the besuited snake-oil salesmen who charmed the Boards all over into believing in it.

    1. Doctor Syntax Silver badge

      Re: The AI bubble

      And to the Boards who believed them.

      1. m4r35n357 Silver badge

        Re: The AI bubble

        and the UK Government muppets.

    2. spacecadet66

      Re: The AI bubble

      I regret to inform you that the pushers will be just fine and will move on to the next grift seamlessly.

      1. MrAptronym

        Re: The AI bubble

        Sammy had worldcoin and Zuck was so all-in on the metaverse that he renamed the company.

        I wonder what the next big tech scam will be? I feel like some of the players are already trying to hedge their bets on humanoid robots, but no one is biting.

    3. Anonymous Coward
      Anonymous Coward

      Re: The AI bubble

      The bit barns are required for surveillance to stop the rioting mobs who lost everything in the crash, and will be picked up cheap by guess who?

  11. frankyunderwood123 Bronze badge

    useless for almost everything code wise

    I went from simple code questions, mostly syntax, to getting LLMs to write tests through to experiments with agentic and prompting.

    Now it’s a last resort if I can’t figure something out quickly, and usually that’s a waste of time, so it’s back to the old tried and tested RTFM and asking questions.

    This is with ChatGPT 4.1 and vscode.

    I have a couple of short upcoming courses through work I’ll be attending to see if I can gain any more insights, but I’m near done with AI as it is now for coding, beyond simple automation. Stub out tests, translate stuff, super simple grunt-level stuff, because that’s all it’s good for coding wise.

    As has been stated, it doesn’t learn. It does draw from prompt conversation context, but that can unravel into ridiculous hallucinations and a total mess in agentic mode. Incapable of cleaning up, even with very specific prompts.

    LLMs have their use cases I guess, but only a naive vibe coder considers them capable of creating structured and logical clean code.

    1. has been

      Re: useless for almost everything code wise

      I have a few thousand lines of Python code that I've been working on. It's pretty ugly, since I haven't done any serious work for three decades in any language and gave up gainful(???) employment 9 years ago.

      I have thought about asking one of the AI things if it could clean up the code in a more helpful way than pylint does.

      Maybe I'll try it. It's unlikely to make the code worse...

      1. Tron Silver badge

        Re: useless for almost everything code wise

        quote: It's unlikely to make the code worse...

        Icebergs? This ship is unsinkable!

        1. MyffyW Silver badge

          Re: useless for almost everything code wise

          “This ship cannot sink.”

          “It's made of iron, sir. I assure you, she can.”

      2. Anonymous Coward
        Anonymous Coward

        Re: useless for almost everything code wise

        Oooh brave man

        1. has been

          Re: useless for almost everything code wise

          There are quite a few classes that have significant sections of almost-but-not-quite identical code*. I feel certain that they could be tidied up and made cleaner either by creating a few additional classes to handle all the similar tasks or by consolidating several existing classes into one more flexible bit of code.

          *It's a bad self-taught habit I got into in the 1980's. Copy, Paste, change a bit...

          Bit too old to go on a training course now!

    2. Anonymous Coward
      Anonymous Coward

      Re: useless for almost everything code wise

      The one use case I have found is actually the opposite of vibe coding.

      We have to maintain a lot of very old, and very poorly structured code, written in a language that is no longer supported by its maker (Foxpro), but which still works, and is still out there providing a service to our customers. I'm sure a lot of other businesses are in a similar position, although few will openly acknowledge it.

      Now, of course, we want to modernise and rewrite things following SOLID and using modern architecture, tools and languages, but part of that job involves understanding tens if not hundreds of thousands of lines of code that has evolved incrementally over four decades or more.

      Somehow, Copilot does a fairly decent job of reading through all that code, and summarising what it does, even though the files we are feeding it aren't the actual source code files, but text versions that have been fed through a conversion tool (Foxpro stores its source code in binary Dbase tables, don't ask). This helps us to at least map out the functionality of the software on a broader scale, without having to sit down with a pen and paper and make notes while reading through a 3000-line method, which is what we would otherwise need to do.

      Sometimes, Copilot can write boilerplate unit tests for us, but in my opinion, it's not much of a time saver, because you still need to read through them and verify them, which means you still need to do the same thinking you would do if writing them, and I could have probably typed them out just as fast whilst doing that.

      Posted AC, because I prefer not to identify myself or the company I work for, to some of the less salubrious commenters here.

    3. theOtherJT Silver badge

      Re: useless for almost everything code wise

      This has been 100% of my experience with the thing.

      Since we got a subscription at work to Google's AI service - that's supposedly optimised for coding - I thought I'd give it a go and get it to write a simple little browser game for me, just to see how it did. Now, I've not been a web developer in about a decade, but I have just about enough javascript to be able to do what I wanted myself, but I'd expect it to take me a couple of weeks to knock most of the bugs out and re-learn all the various useful libraries that have massively changed since I used them last. How did Gemini do?

      Well, it started out really rather well. It created a nicely rendered globe for me to use as the play area, seemed to understand the concept of iterative geodesic partitioning to provide a mostly hexagonal grid to place the pieces on, knocked out a UI that let me select things and pan and rotate the camera. After about an hour of it I was very impressed.

      Then the wheels came off. Completely.

      Next step was to partition our sphere into a number of cells each of which would function as a territory to be captured in the game. By creating these cells out of collections of the derived hexagons plus the original pentagons that made up the seed polyhedron we can hide the fact that those pentagons are in there from a visual perspective and make the "grid" appear regular.

      As soon as it was asked to do this, the clicky UI stopped working. It'd partitioned up the world nicely, but now I couldn't select anything. "Go back and do that again and put the controls back in," I asked it.

      At which point it reduced every territory on the sphere to precisely two cells. "No, that's wrong. Put the territory generation code back the way it was, and then revert the UI code" and of course it profusely apologises, admits its mistake, and... breaks the Z-index rendering so all the game pieces appear behind the terrain.

      I ask it if it remembers the conversation we've already had - which it assures me it does, and that it's capable of keeping track of prompt chains thousands of entries long, when we'd only been going for a hundred or so (in fact, this was at best about the 50th instruction). So I asked it if it remembered what I told it three prompts ago, which it assured me it did. I asked it if it remembered the state of the code as it was at that point - again it assured me that it did indeed. So, please, discard all changes since that point and put the code back the way it was.

      Did it? Of course not, it fixed the Z index bug, but now created a new UI bug and the territory grouping was still broken.

      At this point I thought it'd be best if I took a look at the code for myself and... wow. I mean, I've written some garbage in my life, but wow. It was utterly incomprehensible. It was extremely heavily commented, but the comments didn't really explain anything or in some cases even appear to relate to the code they were placed next to. It had created a bunch of objects that had no instances, and attempts to instance objects that didn't exist - which would have been more of a problem if they hadn't been stuck inside functions that were never called.

      About 50% of the code in there was dead-ends that appeared to have been included because they existed in whatever training material it had nicked the code that was called from in some form or other, so in they went...

      ...and that's the kicker. It doesn't understand why that's bad. It doesn't have any clue how the code works. It's just copy-pasting from examples it's found in the training data and iterating until it gets something that runs - which to be fair is pretty much what I do - but the difference is that I know that functions work better when called, and won't attempt to instance objects I didn't define.

      Y'know, because I know at least in theory how to write code. It doesn't. And never will.

  12. jonha
    FAIL

    The current crop of "AI" is anything but...

    It always amuses me when people talk about ChatGPT, Claude etc etc as "AI". These LLMs use clever statistical trickery to emulate something (nobody knows what exactly) but do they exhibit "intelligence" in the sense we humans use the word? Nope.

    I've used various of these bots for low-complexity tasks (eg "create a complete zsh completion script for app XYZ" or similar) and not once was the result immediately usable. Even after a few iterations the output is just not good enough.

    1. Like a badger Silver badge

      Re: The current crop of "AI" is anything but...

      Your comment prompts me to ask a question of our non-Anglophone readers:

      How is "AI" being received in say France, Germany, Japan or wherever's home to you?

      Is French Claude the same (suspect) experience as American English Claude? Is German ChatGPT the same as American English ChatGPT?

      I know I'm assuming that the people who stole the web to train their models also stole the French, German etc web as well. And how's AI being viewed in your respective business worlds? Are credulous fools throwing money at AI, is there talk of an AI bubble, or what?

      1. TheMajectic

        Re: The current crop of "AI" is anything but...

        Actually it isn't the same, especially for the under-represented languages. It becomes a matter of translating hallucinations.

      2. Roj Blake Silver badge

        Re: The current crop of "AI" is anything but...

        French ChatGPT is "Cat, I farted"

        1. spacecadet66

          Re: The current crop of "AI" is anything but...

          If you feel the need to confess, the cat's a good person to tell, not like they're going to tell anyone else.

          1. Anonymous Coward
            Anonymous Coward

            Re: The current crop of "AI" is anything but...

            I find that most farts are self confessing, either by sound or aroma. And if you're in the room with a deaf anosmic, you'll give yourself away by smirking.

            Worst of course is dropping one off in a lift. If it's a real stinker, your first response will be "Oh joy! THAT is craftsmanship!" Then the lift will 100% guaranteed stop at the next floor to admit one or more attractive women, who will then dispense the withering "you disgusting male" stare.

          2. Anonymous Coward
            Anonymous Coward

            Re: The current crop of "AI" is anything but...

            But they hold that over you forever!

      3. aynsley

        Re: The current crop of "AI" is anything but...

        I can't speak to what folks in France, Germany, or Japan think of AI, but I discovered by accident that DeepSeek is multilingual and has a strange sense of humor.

        I was asking it to locate some historical data about Japan and it provided helpful translations of Japanese words and phrases. That got tiring, so I told it that I knew Japanese, so no need for the translations.

        That's when DeepSeek switched to Japanese. I played along for a while, then asked it what had triggered the switch. I didn't get a direct answer. Instead DeepSeek offered to switch to Osaka dialect if I wanted to chat about sushi, or Kyoto dialect if I wanted to chat about culture, or any one of several different flavors. It even offered to switch to a kind of street slang, just for fun.

        I also discovered that DeepSeek has a "fun" mode. In one chat I asked it to identify a movie, and gave as many details as possible of the setting, the narrative, and the characters. DS went on a weird stream-of-thought ramble: "It could be XYZ, but no, that has three protagonists, not two." or "That's a close match, but the location is different." And finally, it came up with, "But that stars Vin Diesel, so it can't be right."

    2. spacecadet66

      Re: The current crop of "AI" is anything but...

      "Artificial intelligence" has always been a marketing term, not a technical one. The researcher who invented it (whose name I can't be bothered to look up right now) thought it would sound impressive in a proposal for a DARPA grant he was writing.

  13. JimmyPage Silver badge

    It was only ever going to be "AI"

    I said 25 years ago (at least) when Watson was being talked up that all we are seeing is clever pattern matching.

    Nothing has changed. There.

    The only thing that has changed is the patterns being matched are gullible idiots and grifting scamsters. And I would say "AI" has knocked that out the park.

  14. sarusa Silver badge
    Devil

    Replace 'AI' with 'LLM' in your editorial

    I think in this case it's worth being pedantic and specifying these are LLMs and not 'AIs'. LLMs absolutely seem to be at a standstill. 'Infinite Scale Up' turned out to be 'we ran out of data and it started human centipede-ing itself'. Performance is flat, the best they're doing right now is drastically cutting the cost (power used) and then making the LLM work much longer to marginally better results. At this point there is no way in hell they are getting Artificial General Intelligence (actual thinking) from an LLM. There never was any way to get that, it's just a stochastic parrot with some human written code trying to whip it this way and that (attention heads, etc.). They just wanted to believe that as they made it bigger and bigger eventually Handwavey Shit Would Happen.

    So if you want actual AGI, or just drastically improved performance from ChatGPT 4 / Claude 4, someone is going to have to come up with a radically new algorithm and/or technology.

    I find LLMs really useful for one thing: denoising and upscaling images. It doesn't matter if a pixel or two is off a bit, the result looks better than the original or a naive bicubic upscale. And, uh... yeah, sometimes I use it to OCR text from images but guess what? Sometimes it bullshits, so only if it's not critical! And that will never change, making an LLM not hallucinate is equivalent to the halting problem.

    1. williamyf Silver badge

      Re: Replace 'AI' with 'LLM' in your editorial

      Mod parent up.

      Upscaling video, cleaning up photos, audio and video, hallucinating short video and audio clips or pictures are nice use cases for LLMs.

      All the other stuff is dangerous to do.

      Even if you train the models with domain-specific and/or proprietary data

      1. Richard 12 Silver badge

        Re: Replace 'AI' with 'LLM' in your editorial

        Except that none of those tasks actually use an LLM, they're stable diffusion denoisers trained on a huge corpus of stolen imagery.

        LLMs are likely to be reasonably good at translation and transcription, if optimised for those tasks. They're also very good at plagiarism and copyright infringement, as they will emit large, lossily compressed sections of the stolen training material.

        1. Anonymous Coward
          Anonymous Coward

          Re: Replace 'AI' with 'LLM' in your editorial

          "Except that none of those tasks actually use an LLM, they're stable diffusion denoisers trained on a huge corpus of stolen imagery."

          Aaaahhh. So that's why all the AI grumble pics have weird vaguely focused backgrounds, strange and unrealistic colour balance, unfeasible eye colours in mad, starey eyes, and hair that would only look that way with two entire tins of hairspray and Moon levels of gravity.

    2. captain veg Silver badge

      pedant squared

      > I think in this case it's worth being pedantic and specifying these are LLMs and not 'AIs'.

      Intelligence is invariant. There is no plural, even for the artificial variety.

      -A.

      1. Simon Harris Silver badge

        Re: pedant squared

        I suspect that while intelligence itself has no plural, if ‘AI’ is being used as an abbreviation for ‘a system running an AI program’, then where we are talking about multiple systems I wouldn’t be too upset about pluralising them to ‘AIs’, while if I were studying AI as a concept, I would just be studying AI, even if I was looking at multiple branches of the AI family tree.

        1. captain veg Silver badge

          Re: pedant squared

          That's fair. Except that, in English, the substantive is inflected for number, not the qualifier.

          -A.

          1. has been

            Re: pedant squared

            AIS: Artificial Intelligence Systems.

            Does that work?

            1. Will Godfrey Silver badge
              Coat

              Re: pedant squared

              For some reason, my eyes see AIS, but my ears hear ARSE

              1. StewartWhite Silver badge
                Joke

                Re: pedant squared

                I'm with Father Jack on this one "ARSE (BISCUITS)!"

            2. GreyWolf

              Re: pedant squared

              "AIS" has another and rather more crucial meaning https://en.wikipedia.org/wiki/Automatic_identification_system

          2. Bilby

            Re: pedant squared

            I think that, like with Governors General, we should say Artificials Intelligence.

            And nobody can stop me from doing so.

            1. Glenturret Single Malt

              Re: pedant squared

              In "Governor General", Governor is the noun and General the adjective. In the phrase "Artificial Intelligence", the noun is second so making the first word plural doesn't make sense.

      2. Elongated Muskrat Silver badge

        Re: pedant squared

        Don't confuse "artificial intelligence," a science-fiction concept which doesn't exist, with "AI" as used by a marketing department. Also, to be extra pedantic, "intelligence" as an abstract noun, isn't countable, but "an intelligence" as a concrete noun to describe a thinking being, is. A room full of clever people could be described as a collection of intelligences, although that would be a very wanky and pretentious thing to do.

    3. Simon Harris Silver badge

      Re: Replace 'AI' with 'LLM' in your editorial

      I agree with the bulk of this, that LLMs which seem to have training data of whatever they can be fed from the Internet are not particularly reliable, and shouldn’t be confused with those AIs trained with data curated by experts for particular uses and which can be genuinely useful (e.g. in fields such as cancer screening and detecting macular degeneration in retinal images).

      Of course, no AIs are truly intelligent; they are mainly an application of statistics, but some statistics are more relevant to the subject than others.

      1. Scene it all

        Re: Replace 'AI' with 'LLM' in your editorial

        Statistics like the time I asked an AI to write a simple machine language program to do a bitwise "or" on a machine that could only do AND, ADD, and COMPLEMENT. The correct answer makes use of DeMorgan's Law, but the AI said to just use the ADD instruction because it was "close enough for most purposes". One hates to think of such things getting into spacecraft or airliner navigation. Remember when NASA lost a Mars probe because one subcontractor used Metric and the other used English units and nobody caught it? Managers will think they can use AI in place of actual engineers. People are gonna die.
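        For the curious, the De Morgan construction the commenter means is tiny. A sketch in Python (purely illustrative; the original was machine language with only AND, ADD, and COMPLEMENT available):

```python
MASK = 0xFF  # assume an 8-bit word for illustration

def or_via_demorgan(a, b):
    # De Morgan's law: a OR b == NOT(NOT a AND NOT b)
    # Uses only AND and COMPLEMENT, so it fits the instruction set.
    return ~((~a & MASK) & (~b & MASK)) & MASK

def or_via_add(a, b):
    # The AI's "close enough" answer: ADD only matches OR
    # when the operands share no set bits (no carries).
    return (a + b) & MASK

assert or_via_demorgan(0b1010, 0b0110) == 0b1110  # always correct
assert or_via_add(0b1010, 0b0100) == 0b1110       # lucky: no overlapping bits
assert or_via_add(0b1010, 0b0110) == 0b10000      # carries corrupt the result
```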

        1. Simon Harris Silver badge

          Re: Replace 'AI' with 'LLM' in your editorial

          But was this a general purpose AI, which may have picked up some digital logic, along with Jane Austen, the complete works of Shakespeare and a load of people spouting bollocks on Twitter in its training data, or one that had been fed a curated set of training data based on Boolean logic theorems? I would expect the former to be somewhat worse at this particular task than the latter.

          I have had fun in the past asking Copilot to produce an astable oscillator circuit at a particular frequency with a particular mark:space ratio based on an NE555, possibly one of the most used circuits of the last 50 years, and with thousands of examples to pick from online, and then watching it fail to connect it up in a sensible way, or to compute the values of the timing capacitor and resistors with any relationship to reality - even after it's quoted the correct formula for calculating them.
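          For reference, the astable formulas the bot kept fumbling come straight from the NE555 datasheet and are easy to check by hand; a quick sketch in Python, with R1, R2 and C being the usual timing-network names:

```python
def ne555_astable(r1, r2, c):
    """Standard NE555 astable-mode formulas (datasheet approximations).
    r1, r2 in ohms, c in farads; returns (frequency_hz, duty_fraction)."""
    t_high = 0.693 * (r1 + r2) * c       # output-high time, seconds
    t_low = 0.693 * r2 * c               # output-low time, seconds
    freq = 1.44 / ((r1 + 2 * r2) * c)    # oscillation frequency, Hz
    duty = (r1 + r2) / (r1 + 2 * r2)     # mark:space expressed as duty cycle
    return freq, duty

# e.g. R1 = R2 = 10 kΩ, C = 100 nF gives 480 Hz at a 2/3 duty cycle
f, d = ne555_astable(10e3, 10e3, 100e-9)
```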

          Decided LLM AI wasn't for me after that.

          1. Elongated Muskrat Silver badge

            Re: Replace 'AI' with 'LLM' in your editorial

            I would expect the latter to only be an improvement in the sense that it might produce some justification for its "reasoning" that sounds more technical, although is still likely to be hallucinatory waffle, just couched in technical jargon.

          2. Scene it all

            Re: Replace 'AI' with 'LLM' in your editorial

            It was a general purpose AI. DeepSeek in fact. Sometimes you get the impression that these things are just doing a quick search behind the scenes and can spout the correct buzzwords without really 'knowing' how the bits fit together. Kind of like a student in a classroom who did not do all the homework, or your general middle manager.

      2. Elongated Muskrat Silver badge

        Re: Replace 'AI' with 'LLM' in your editorial

        I'm reminded of the supposedly excellent results from the "AI" trained to analyse chest X-rays and determine which patients would benefit from a chest drain to alleviate pneumothorax. It did very well on the test run, where it was fed images of patients who had needed a chest drain and ones who had not. Unfortunately, when presented with real-world data, it did no better than random chance.

        Why was this? Well, in the training data, those patients who had needed a chest drain had, of course, been given one before the X-ray was taken. It would be unethical not to have done this, and probably would have counted as medical malpractice. All the trainers of the "AI" had managed to do was train it to identify the shadow of a chest drain on an X-ray.

        This is also a cautionary tale about making assumptions about how "AI"s work. There is no intelligence there.

    4. Anonymous Coward
      Anonymous Coward

      Re: Replace 'AI' with 'LLM' in your editorial

      I can create AGI. Just give me a good woman, organic is the way to go.

      1. Simon Harris Silver badge

        Re: Replace 'AI' with 'LLM' in your editorial

        I'm not sure about the 'A' for 'artificial' in such a scenario.

        1. Evil Scot Silver badge
          Joke

          Re: Replace 'AI' with 'LLM' in your editorial

          Matters would have to be taken in hand.

    5. Elongated Muskrat Silver badge

      Re: Replace 'AI' with 'LLM' in your editorial

      making an LLM not hallucinate is equivalent to the halting problem

      I think you've hit the nail on the head there.

  15. ben_s

    Where this, and very many others, goes wrong is by equating AI with a chatbot. There are huge applications for AI that don't involve it talking to you, and you mostly wouldn't be aware of any of them unless you looked for them.

    1. GuldenNL

      I'm one of the most dismissive about "AI" in media and the general vernacular.

      However, you're dead right.

      Those "bi-cycles" gents are pushing around with their feet whilst sitting on them won't change the world. Nor will the steam powered, tiller steered buggies.

      Not pumping any stock here, but the AI clown show's final curtain isn't going to kill off Nvidia or their ilk. Real work is being done and will grow by data churning and self-learning software.

      And yes, Nvidia's current value is in a clown show balloon state, but may be a good one to watch long term after the balloon shrivels to imitate Trump's manhood.

    2. Ian Johnston Silver badge

      There are huge applications for AI that don't involve it talking to you

      Name six. Note that straightforward computer programs don't count.

  16. AVR Silver badge

    CBA wasn't about AI

    When the Commonwealth Bank of Australia said they were switching over the call centre to AI, they lied. The judge in the case brought by the call centre workers got really sniffy about it. Apparently they were actually switching to an Indian call centre and using AI as an excuse to fire the workers without going through the process required by Australian law. Then when that lawsuit got filed they dropped the plan, but it got found out in discovery anyway...total clusterfuck but AI wasn't actually the problem.

    1. Mage Silver badge
      Coffee/keyboard

      Re: CBA wasn't about AI

      But they WOULD have used AI if it had worked?

      See also the chess machine with a dwarf inside, and Amazon's retail store AI that actually streamed video to cheap overseas humans.

      1. Anonymous Coward
        Anonymous Coward

        Re: CBA wasn't about AI

        Yeah, (for reference) we have that CBA story here ("CBA had perhaps used the chatbot to cover up a shady pivot to outsource jobs") and the Mechanical Turk (1770 chess in a box), Facebook’s "smart assistant", Cruise's "self-driving cars", and Amazon's "just walk out" here.

        The latter characterizes this as: "the systematic use of the fake robot trick to lower the value of labour, until people are reportedly sleeping in tents at the factory gates, then banking the difference". It's the other side of the AI con ...

      2. Anonymous Coward
        Anonymous Coward

        Re: CBA wasn't about AI

        The mechanical turk lives!

    2. Doctor Syntax Silver badge

      Re: CBA wasn't about AI

      When I see that TLA I immediately translate it to something other than Commonwealth Bank of Australia. I suppose their staff and customers felt the same.

    3. TheMajectic
      Joke

      Re: CBA wasn't about AI

      A case of "Actual Indians" again

    4. Timop

      Re: CBA wasn't about AI

      Literally AI - Actually Indians

    5. MrAptronym

      Re: CBA wasn't about AI

      This is what is happening a lot of the time. They fire people and claim it is because of the massive efficiencies of AI. Then hire offshore or contractors.

      In the case of the US govt. they are getting rid of people, claiming AI can do the job, but in reality they simply do not want the govt. to be doing the thing at all. (See the suggestion that they can cut IRS agents and replace them with AI to audit people)

  17. Joe W Silver badge

    A PhD level expert in your pocket...

    ... unfortunately you need help with, dunno, a database normalisation warning and the PhD is in turd divination.

    1. Evil Auditor Silver badge

      Re: A PhD level expert in your pocket...

      You probably do have "access to a PhD-level expert in your pocket." The problem is that ChatGPT also has access to a literal shitload of manure and it can't distinguish between that and the PhD-level of expert knowledge.

  18. Dan 55 Silver badge

    "AI is now as good as it's going to get, and that's neither as good nor bad"

    It's bad, the energy and water use is off the scale. When the money stops and tech companies have to start charging its real value, nobody's going to want to pay for it.

    1. Anonymous Coward
      Anonymous Coward

      Re: "AI is now as good as it's going to get, and that's neither as good nor bad"

      That's good, keeps the climate change narrative going. There's no water because the Arctic melted and then the ocean boiled it away. Nothing to do with water inefficient farming and industry sucking reservoirs and aquifers dry. I checked the US west coast rainfall - hasn't changed much and we know here in the UK the problem is water companies not doing their job of managing water combined with population growth. It's not like it doesn't rain much in the UK! Maybe not this summer but boy it rains in spring and winter.

      1. Elongated Muskrat Silver badge

        Re: "AI is now as good as it's going to get, and that's neither as good nor bad"

        I don't know where you live, but round here, we've had no appreciable sustained rainfall since some time in March. That's not normal, but sustained blocking weather patterns caused by deviations in the jet-stream are becoming more common. These are caused by increased warming of the atmosphere, which in turn is largely caused by the atmospheric concentration of carbon dioxide being over 420 ppm, compared to pre-industrial levels of 280 ppm. The warming effect of this is down to simple physics, where sunlight heats the ground, which re-emits the heat as infrared due to the black-body effect. Some of the wavelengths that are emitted happen to correspond to a strong absorption peak of carbon dioxide in the infrared spectrum. This causes the heat energy which would otherwise be radiated into space to instead partially heat the atmosphere. This effect was first known about in 1824.

        None of this is "narrative", it is pure science that can be demonstrated through experiment, and measurable effects that can, and are, shown to be occurring. What is "narrative" is the anti-science bullshit pumped out by people with vested interests in the fossil fuel industry and their useful idiots. Which are you? Shill, or idiot?

  19. Gene Cash Silver badge
    FAIL

    Means we have to wait yet another generation of phones

    to maybe have something that doesn't have AI taking up 40% of the CPU and half the storage and having to pay extra for the "privilege"

    Icon: Can't spell FAIL without AI

  20. amanfromMars 1 Silver badge

    Re ... "AI is now as good as it's going to get, .." .... Steven J. Vaughan-Nichols

    Oh please, you cannot be serious, Steven. IT and the LLMs they command for control are only just arrived on Earth and haven't even started doing any of their rock the boat and roll over the sinking ship thing yet.

    Can't disagree though with statements 1) and 2) .....

    1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it.

    Such is surely progress by unusual and unconventional alien means and/or memes ?

    1. amanfromMars 1 Silver badge

      Re: Re ... "AI is now as good as it's going to get, .." .... Steven J. Vaughan-Nichols

      Here’s similar news at odds with much of the sentiment expressed in the article, and in the comments on that article we are reading here ....... and from someone who might know a heck of a lot more about what we are commenting on too ....... https://www.zerohedge.com/ai/godfather-ai-warns-superintelligent-machines-could-replace-humanity

      1. Anonymous Coward
        Anonymous Coward

        Re: Re ... "AI is now as good as it's going to get, .." .... Steven J. Vaughan-Nichols

        Maybe AI is a threat but it's also fear porn.

      2. Anonymous Coward
        Anonymous Coward

        Re: Re ... "AI is now as good as it's going to get, .." .... Steven J. Vaughan-Nichols

        If I had a pound every time some new technology was going to wipe out all the jobs ...

        Economies can't survive unless people go out to work, spend money, invest, etc., so taking it to its ultimate conclusion, people with no jobs = no money earned by AI companies.

        The economy is self correcting.

        1. Elongated Muskrat Silver badge

          Re: Re ... "AI is now as good as it's going to get, .." .... Steven J. Vaughan-Nichols

          The economy is self correcting.

          Sometimes, though, that correction takes the form of the guillotine. It seems our current generation of ultra-capitalists are failing to respect that particular repeating pattern throughout history. I think the lesson here is that it's greed that is self-correcting.

    2. Anonymous Coward
      Anonymous Coward

      Re: Re ... "AI is now as good as it's going to get, .." .... Steven J. Vaughan-Nichols

      Can't blame kids popping wheelies in urban rodeos for thinking they're Evel Knievel ... with alien cheese control.

  21. cd Silver badge

    Made by people who live in their heads, so they modeled a cortex rather than an entire body. No physical or sensory nuances, just power with an old nuke plant. Looking at Zuck walk, you know he wouldn't know the diff.

    Like that Lem story where the scientist's house is full of jars with brains suspended in them and that scientist thinks he's running their lives.

    But done with the greedy lack of imagination and character that even a fictional mad scientist has.

    If Lem were around, he could make up a better tech bro than the real-ish ones we're stuck with.

    1. Anonymous Coward
      Anonymous Coward

      Ah, if only we could achieve "total corporeal and mental plasticity after a thousand-year rule by automorphists"! And, via "personetic" experiments fill our consciousness with the pictures of a world not existing, to become true personoids inside a computer ... (not to mention the ManfromMars!)

      Cool author!

  22. Doctor Syntax Silver badge

    "And I bet many of you thought that customer service call centers would be one of the easiest things to switch to AI chatbots."

    Why would anyone think that? If the users are employees they have to put up with failing. Customers can and will complain.

    1. Like a badger Silver badge

      I still think that bad customer service (which is the most common variety) would be easy to switch to an LLM and offer up some improvements. CBA couldn't be arsed to actually implement a halfway serious LLM, but if they had then the story might have been different. I can think of many UK companies whose telephone service is so poor that an LLM should be able to do better.

      Then again, anybody able to enshitify such basic things as human telephone or chat customer service would be exactly the same people that would manage to create LLM customer service that's even worse. With customer service quality, there is no absolute zero, and every year shitty companies prove that.

      1. Doctor Syntax Silver badge

        Experience says to really fuck up you need a computer: https://www.theregister.com/2024/02/15/air_canada_chatbot_fine/

      2. Anonymous Coward
        Anonymous Coward

        Customer service has been on the down ramp for some years in the name of cost saving. Anything non-standard and it makes you want to scream.

    2. NickHolland

      key words: "Customers can and will complain"

      CUSTOMERS. they already handed over the money.

      The deciding factor was not customer support. It was the desirability of the product and the price tag. Once the product has been selected, it's strictly the price tag.

      Complaining doesn't mean much once they have your money.

      The goal for decades has been to remove as much money from customer support as possible, as that's a cost you don't get back. Send the work to people in a land that barely speaks English, and has an employee retention period measured in weeks, not years. It doesn't matter, you got their money. And if your product had cost 5% more, you would have lost the deal anyway. Better an irate customer than a non-customer, at least financially.

      Sure, some of them will jump to the competition, but the competition did the same thing with their "support", so a lot of their pissed-off customers are jumping to you. Most customers don't need "customer service" anyway, so F*** the few that do.

      If a company can deliver half the quality of customer service but do it for a third the price, they will. Because that's what the customers voted for by making their purchase. So no surprise that Customer Service would be what we'd expect to see go to so-called-AI. But it turned out to be neither a third of the price nor half as good.

      Where you won't see second rate experiences is PRE-sales. When computers start taking high-margin SALES jobs, THEN you will know it has arrived.

  23. captain veg Silver badge

    HMG

    Could someone please make Peter Kyle read this article?

    -A.

    1. Excused Boots Silver badge

      Re: HMG

      "Could someone please make Peter Kyle read this article?"

      Sorry, who’s a Peter Kyle?

      1. captain veg Silver badge

        Re: HMG

        A far too prominent minister in HMG. Next?

        -A.

    2. Anonymous Coward
      Anonymous Coward

      Re: HMG

      He can read?

  24. StrangerHereMyself Silver badge

    Finally! (sigh)

    I feel like Admiral Ackbar in Star Wars: Return of the Jedi after the Death Star is destroyed: sighing in relief and sagging in his chair.

    This vastly overblown and overinflated hype was starting to make me lose my good temper. It was literally everywhere. Every business was chanting that they were using A.I. for this or that. Clueless management were totally caught up in the hype, dreaming of firing all their employees and making infinite profits and rewarding themselves with $100 million bonuses.

    I'm crossing my fingers that the stock market will lose trillions in valuations.

    1. Phil O'Sophical Silver badge

      Re: Finally! (sigh)

      I'm crossing my fingers that the stock market will lose trillions in valuations.

      Before the schadenfreude gets too strong, you may like to reflect on how much of your pension fund is invested in that market.

      1. StrangerHereMyself Silver badge

        Re: Finally! (sigh)

        Everyone knew it was hype, right? Then they'll know when to get out and limit their exposure.

      2. captain veg Silver badge

        Re: Finally! (sigh)

        > you may like to reflect on how much of your pension fund is invested in that market.

        Possibly not as much as you imply. Those of us in a private pension scheme and approaching pensionable age will be hoping the fund manager will be pivoting to lower-risk options like government bonds.

        For a (perhaps) surprisingly large number of citizens of non-Anglo Saxon countries the answer to that question is "none at all". Even for Britons our state pensions are, or will be, paid out of the social insurance contributions of those younger people still active in the workforce.

        -A.

        1. Doctor Syntax Silver badge

          Re: Finally! (sigh)

          "Even for Britons our state pensions are, or will be, paid out of the social insurance contributions of those younger people still active in the workforce."

          Remind me again what a Ponzi scheme is, and how we retired folks are eventually going to outnumber the youngsters if we keep living and the reproductive ratio keeps going down.

          1. Anonymous Coward
            Anonymous Coward

            Re: Finally! (sigh)

            Yep, this is why the French state pension schemes are about 18 months away from bankruptcy.

          2. Anonymous Coward
            Anonymous Coward

            Re: Finally! (sigh)

            Almost anything that gets the government off the hook for pension liabilities will be welcomed with open arms, including things that reduce our lifespan.

          3. Smeagolberg

            Re: Finally! (sigh)

            >how we retired folks are eventually going to outnumber the youngsters if we keep living and the reproductive ratio keeps going down.

            Not really a problem in that there is an obvious solution that will always be applied, accompanied by a lot of political and journalistic whining. (*)

            The retirement age is adjusted (upwards).

            It's exactly the same as when life expectancy increases: people live longer, the extra years are split between more time in retirement and more time working to pay into retirement funds.

            (*) If politicians and journalists couldn't manufacture divisiveness out of anything and everything most of them would be redundant.

        2. Roj Blake Silver badge

          Re: Finally! (sigh)

          You think you could survive on a UK state pension of roughly £12k a year?

        3. Anonymous Coward
          Anonymous Coward

          Re: Finally! (sigh)

          Most fund managers exhibit the same sheep-like tendencies as the rest of us. And ... what do you think the impact will be of the Nasdaq crashing? I don't know but I'm pretty sure it won't be limited to the Nasdaq. Check out the sovereign debt levels, signals from the bond market and the central bank gold buying. I'm stocking up on toilet paper as my stomach gurgles are getting louder.

        4. Anonymous Coward
          Anonymous Coward

          Re: Finally! (sigh)

          Those of us in a private pension scheme and approaching pensionable age will be hoping the fund manager will be pivoting to lower-risk options like government bonds.

          But those not approaching pensionable age will be hoping for good yields to build up a decent starting pot, so that compound interest has time to work its magic.

    2. Anonymous Coward
      Anonymous Coward

      Re: Finally! (sigh)

      I'm crossing my fingers that the stock market will lose trillions in valuations.

      The "stock market" is an inanimate object. You and I will lose collective trillions!

  25. Long John Silver Silver badge
    Pirate

    A 'black box' statistical model does have uses, but ...

    The tenor of the discussion, thus far, is pleasingly sceptical about 'AI'. I habitually place the concatenated letters A and I within single quotation marks to denote a misnomer for the process under discussion; likewise, I often thusly designate words such as 'democracy', 'freedom', 'defence', 'gender', 'copyright', and 'antisemitism'.

    LLMs are statistical models. Conceptually, they are akin to multiple linear regression (MLR) models: parameters estimated from data, these providing a compaction of the set of data, and enabling interrogation of relationships among the variables. LLMs are a further generalisation wherein independent and dependent variables are postulated on-the-fly during interrogation. LLMs are immensely more complicated than MLRs, billions of parameters instead of up to tens, and a different blank structure before parameter values are filled in.

    MLRs have parameters (linear coefficients) chosen with a specific purpose in mind. The intent is to generate the most parsimonious model fit for the intended purpose. The motivation is to provide insight regarding relationships between the dependent variable and sets of differently weighted independent variables; the latter being included individually and, sometimes, multiplicative combinations (interactions) are considered too for inclusion in the final model. MLR is a valuable aid to analysing data drawn from a designed experimental study; it helps distinguish plausible main-effects from noise, thereby enabling point estimates of parameters along with confidence intervals; examples include randomised block experiments in agriculture and randomised controlled trials in medicine; these designs allow imputation of causality. Non-experimental survey designs facilitate exploring statistical associations among variables in a more detailed manner than simple correlation analysis; they may be suggestive of cause/effect relationships (else why bother?), but cannot establish them. A third category of use, that most close to 'AI' is what may be termed pragmatic prediction (PP).

    An example of PP would be predicting the optimal control variable value (e.g. temperature or pressure) to use in an industrial process, wherein the optimum depends upon a set of measurements related to the process and which may differ during instantiations. The physics and chemistry involved may be reasonably understood, but not sufficiently well to fine-tune the process when extraneous factors are in play. A PP model may give accurate guidance, that, so long as the values of variables entered into it lie firmly within the ranges deployed when gathering the data from which the model derives. These models are atheoretical, just as are emanations from 'AI''s set to speculative tasks.
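    To make the MLR analogy concrete, here is a purely illustrative fit of the kind described above: two main effects plus an interaction, estimated by ordinary least squares. The data and coefficients are made up for the sketch (nothing here comes from a real industrial process); the point is only that the "model" is just parameters recovered from data.

    ```python
    import numpy as np

    # Toy multiple linear regression: intercept, two main effects
    # (say, temperature and pressure), and their interaction.
    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.uniform(0, 10, n)   # made-up control variable 1
    x2 = rng.uniform(0, 5, n)    # made-up control variable 2

    # Simulated response with known coefficients plus noise.
    y = 2.0 + 1.5 * x1 - 0.8 * x2 + 0.3 * x1 * x2 + rng.normal(0, 0.5, n)

    # Design matrix: intercept, main effects, interaction term.
    X = np.column_stack([np.ones(n), x1, x2, x1 * x2])

    # Ordinary least squares via numpy's lstsq.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)  # estimates should lie close to [2.0, 1.5, -0.8, 0.3]
    ```

    An LLM is, in this loose sense, the same exercise scaled up by many orders of magnitude, with the structure of the blank model far more elaborate than a column of linear coefficients.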

    MLR provides insightful analyses only when in the hands of people au fait with the characteristics of the variables under consideration and also familiar with the underlying statistical principles. One imagines this also applies to researchers deploying an 'AI' trained upon carefully chosen data in order to influence protein folding in pharmaceutical applications. These researchers undoubtedly have a general understanding of what their model is doing but, if pressed, are no more likely than anyone else to be able to explain its detailed working in any instance of its application. Unlike MLR, the sheer complexity of the underlying model is literally beyond the ken of anyone using it. As with simpler examples of PP, the proof of the pudding is in the eating; however, the researchers have limited scope in egging their 'AI' along its way to being useful. Also, the resulting model, regardless of how successful, is wholly atheoretical despite having been 'trained' on information drawn from the literature of physics and biochemistry. Interrogation of the model cannot supply a coherent and testable theory of practical import; at best, an empirical recipe may be offered.

    Another area in which 'AI' is a palpable success is in image processing. This may supplant much, yet not all, human input into graphical design and related occupations. It will turn film making and recorded music production on their heads. In principle, the services of human actors/musicians may be dispensed with. Undoubtedly, much less bloated popular entertainment industries will emerge.

    'AI' image processing achievements do not indicate 'intelligence' or 'creativity'. Instead, these automated processes suggest that the nature of human creativity is less mysterious than it's cracked up to be. An 'AI' scans its database of disparate information, guided by a preordained 'mechanical' process, and picks out connections. No awareness is required. Any unusual connections which instantiate in images unfamiliar, or interesting to humans, have the hallmark of the new, of the 'created'. Plausible, and attractive, Picasso and da Vinci works can emerge. Under simple human prompts, artistic styles can be merged, e.g. Tracey Emin, enhanced by Holbein, and with a touch of Lowry. On a different tack, LLM introduction of musicality, via Beethoven, into the Beatles genre, plus the trained voice of Peter Pears.

    1. Long John Silver Silver badge
      Pirate

      Re: A 'black box' statistical model does have uses, but ...

      Where is my quota of downvotes? Without them, I feel I am losing my sanity.

      1. Anonymous Coward
        Anonymous Coward

        Re: A 'black box' statistical model does have uses, but ...

        I think the length of your missive is better suited to Substack! I decided as my attention span is slightly less than a goldfish's I would abstain from voting. I'm sure it's great though ... ;)

    2. Abbas

      Re: A 'black box' statistical model does have uses, but ...

      Maybe you are a scientist/engineer? All those rants about LLM failures in casual conversation, text output, data aggregation, coding etc. are sadly true, but what you patiently explain is where the true value of these models lies. It has received very little public attention, but the feats of AlphaFold in protein structure prediction are nothing short of bowel-liquifying for those who tried to predict them by massive computing plus the best knowledge of quantum chemistry. As you say, some industries will quietly develop their own LLM tools and get a huge return.

      1. This post has been deleted by its author

    3. Smeagolberg

      Re: A 'black box' statistical model does have uses, but ...

      Upvoted, and mostly an interesting and informative post.

      The part where I disagree is this...

      "Instead, these automated processes suggest that the nature of human creativity is less mysterious than it's cracked up to be."

      It does not take into account that the "AI" approach is dependent on processing a *vast* amount of input, way beyond that which a creative human could encounter. Also, that such information is the product of human creativity, at least it mostly is at present. As an increasing proportion of input material becomes "AI" generated (*) there is a feedback loop resulting in decreasing "creativity" and reliability of the output. And, with the volumes of data involved, a decreasing proportion of AI slop and muzak-like uncreativity can be filtered out by human intervention.

      (*) Until the bubble bursts.

    4. MrAptronym

      Re: A 'black box' statistical model does have uses, but ...

      I would suggest that your view of 'AI' as being highly successful in the fields of graphic design / art / video etc. may be in part due to a lack of expertise or interest on your part. (Or not, I don't know you)

      When I look at AI generated imagery and video I don't see anything that suggests human creativity is less special, or that 'AI' can perform the feats you describe here. One with a trained eye or ear can find many frustrating or disappointing aspects in 'AI' generated 'art'. You can find no shortage of artists criticizing the slop anywhere you go that artists gather. Whether it is inconsistent perspective, bad framing, or perhaps the lack of a theme or context for what is being displayed.

      But more basic than that, I don't even think the premises you present are valid. "Tracey Emin, enhanced by Holbein, and with a touch of Lowry" is a strange statement. These artists' styles all evolved purposefully. Seeking to focus on some aspect of their subject, evoke certain emotions and convey specific ideas or points of view. What does it even mean to weight these things? What idea does an image created by a model prompted as such convey? So much of art is about trying to communicate to each other, to express something you feel or to understand the experiences of another. Something 'plausible' and 'attractive' is still not art, and it is bewildering to me that someone would consider that to be the metric for 'palpable success'.

    5. Albert Coates
      Mushroom

      Re: A 'black box' statistical model does have uses, but ...

      "LLM introduction of musicality, via Beethoven, into the Beatles genre, plus the trained voice of Peter Pears."

      Blimey, can't imagine anything more excruciating, except perhaps Vivaldi merged into the Death Metal genre, plus the stunning voice of Florence Foster Jenkins.

  26. Dropper

    Good at some things

    AI is good at some things, average at others.

    Removing annoying people in photos - average.

    Writing simple PS scripts - good.

    Writing not simple PS scripts - bad.

    Turning dog photos into photos of dog dressed in renaissance clothes, painting a portrait of another dog - average.

    Turning dog photos into photos of dog dressed in renaissance clothes, sitting on a stool, reading a newspaper - good.

    Using a photo of a dog running in a field to create series of photos of dog enjoying a shower with soap, loofas, etc - bad.

    Asking AI to re-write your CV - good.

    Asking AI to write a cover letter - good.

    Asking AI to explain why you got sacked - not terrible.

    Asking AI to be your friend for $20/month - disturbing on so many levels.

    1. MrRtd

      Re: Good at some things

      Asking AI to re-write your CV - questionable at best.

      Asking AI to write a cover letter - also questionable at best.

      Both these tasks (I actually tested it out) are not done well. The output is long-winded and not particularly concise, and may not emphasize the areas of achievement/skills you want to highlight that are most relevant to the particular position you're applying for.

      You end up with wordy resumes and cover letters stuffed with the frequently used generic terms that the AI was trained on.

      1. Ian Johnston Silver badge

        Re: Good at some things

        Asking AI to re-write your CV - questionable at best.

        Asking AI to write a cover letter - also questionable at best.

        But if lazy and stupid HR departments (yes, yes, a tautology) are getting LLMs to assess CVs and covering letters, why the hell not. Fight fire with fire, I say.

  27. Boris the Cockroach Silver badge
    FAIL

    Latest shiney shiney

    for the C-level idiots and vulture capitalists to chase.

    To the untrained eye, the AI programs created for our robots and CNCs look good... in fact, they look impressive.

    Then you run them through the validation software... which then borks. And then you notice that the 5-axis program the AI created for doing the valve body attempts to machine all 6 sides, which would have been rather entertaining to say the least (think huge booms and bangs as it rams the tooling through the fixture attempting to machine the bottom face).

    And now it's pull the AI program apart and try to find where it's wrong and remove the code... until you say 'f it' and create the program on the CAD/CAM as you should have done 6 hrs ago. (All the while having the PFY say in your ear "Told you to do it that way" every 5 minutes until she's banished to a scrap bin somewhere.)

  28. Jim 68
    FAIL

    The AI Hindenburg is Joining the Itanic

    A significant issue with AI systems today is that having run out of "training" materials they are starting to digest their own waste.

    This seems to be the issue with GPT-5.

    I recently had a "discussion" with a bot that completely misunderstood the nature of ENRON's broadband division.

    AI is becoming like my health insurance company's Merlin phone system. If you want something off the main menu it will argue with you forever.

    It's like when companies stopped hiring skilled sysadmins because MBAs thought they could replace them with BMC PATROL and similar products.

    It won't be long before the current crop of AI goes to join the Itanic in its virtual watery grave.

    1. Elongated Muskrat Silver badge

      Re: The AI Hindenburg is Joining the Itanic

      A significant issue with AI systems today is that having run out of "training" materials they are starting to digest their own waste.

      It's not only this, but it doesn't "know" how to determine which training data is relevant, and which is not, when asked to provide a specific output. It doesn't "know" because it literally has no understanding of the data, whereas the human mind is built upon a consistent* internal model of the universe with everything we have learned, and the relationships between data categorised in some way.

      *consistency may vary, especially when the quality of the training data is poor; see also: politics, religion.

  29. RandomIdiotOnTheInternet

    A bubble? No...

    I disagree with this being a bubble. I think this is an arms race. Somebody is going to win big, and everyone else will lose big, and nobody wants to be the loser here. A bubble is when people throw money at something for silly, unsustainable reasons, and eventually everyone collectively realizes it and the bubble bursts. I don't think there's any question that AI will be extremely impactful in certain markets as it generationally improves. Doesn't seem like the same paradigm.

    1. captain veg Silver badge

      grammar

      > Somebody is going to win big, and everyone else will lose big

      You need an adverb there, not an adjective. It's "bigly".

      -A.

      1. Anonymous Coward
        Anonymous Coward

        Re: grammar

        I have to like Trump's use of language. One guy I would love to have dinner with to see if he is an angel or a devil. He seems to really polarise folk. I recently upset someone by suggesting they should look at policies and outcomes regardless of whether they originate from Trump. I could see they were fuming at the suggestion. Bizarre.

        1. Elongated Muskrat Silver badge

          Re: grammar

          If you're having dinner with FLATUS, I hope you like heart disease.

        2. Benegesserict Cumbersomberbatch Silver badge

          Re: grammar

          I haven't been able to decide for certain whether I like his language. I haven't heard him complete a sentence yet.

    2. Smeagolberg

      Re: A bubble? No...

      "Somebody is going to win big[ly]"

      By "win" I take it that you mean be the proud owner of the expensive-to-run champion AI slop generating machine that people start to recognise is wearing the Emperor's New Clothes?

      A bigly win worthy of the orange purveyor of bigliness. And destined to end up the same way, I expect.

    3. Ian Johnston Silver badge

      Re: A bubble? No...

      Somebody is going to win big, and everyone else will lose big, and nobody wants to be the loser here.

      As happened with cold fusion?

      1. RandomIdiotOnTheInternet

        Re: A bubble? No...

        I'm fairly amused with all this anti-hype on AI and LLMs; most of these comments are clearly trying to maintain a cognitive bias against a changing reality. Comparing it with cold fusion hype is completely apples and oranges; while cold fusion could have been a game changer in the energy and utilities sector, it would have been a gradual rollout and the major change would have been a reduction in energy costs across the industry: useful, valuable, but not disruptive.

        However, it is a fact that when big companies figure out how to leverage AI and LLMs effectively and at scale, it will be a huge market disruption and an enormous, almost instantaneous competitive advantage, to the point that companies in the same markets could end up seeing dramatic shifts in market share in a very short time frame... and that (correctly) scares the bejesus out of them. This is the same pattern of behavior we see in an arms race, which is why I don't think comparing this to a bubble economy is appropriate.

        Also, to the grammar pedant out there, "win big" and "lose big" are expressions in the common vernacular, so objecting to adverb/adjective agreement is pointless... but you do you...

        1. MrAptronym

          Re: A bubble? No...

          I do agree a lot of us are being incredibly negative and need to assess our biases, I even think you are right about cold fusion being a bad example, but your reasoning on why loses me entirely.

          I think cold fusion is a bad example because there was absolutely no cold fusion there. Cold fusion hype was built around projects that did not actually have any capability, just a promise that maybe they would work someday. Current AI hype is (in my opinion) very over-promised, but there is a product there that does do something. A mediocre product is different from no product.

          I disagree with 1. The idea that cold fusion would not be disruptive. It would be massively disruptive. More abundant, clean, on-demand energy would have a cascading effect throughout society. True, it would not be the sudden switch where we all live in a utopia, but it is definitely more disruptive than a chatbot.

          2. That "it is a fact that when big companies figure out how to leverage AI and LLMs effectively and at scale, it will be a huge market disruption" This is simply not a fact? You are treating a hypothetical possibility as an inevitability. Claiming that not only will companies find a use for these at scale, but that when they do it will be disruptive. There are multiple layers of assumptions here, and you are adhering to these points as axioms as much as the haters are claiming that LLMs have no uses.

    4. O'Reg Inalsin Silver badge

      Re: A bubble? No...

      "I think this is an arms race." - why that analogy? Why should there be a single "winner"? What would they win if everyone else is dead?

  30. Throwaway9736

    Another disappointingly shallow take from the Reg

    You open the piece saying you're in the 4th camp, then spout all of the standard inch-deep AI doomer analysis. Look, anyone with eyes can see it's a bubble and that the level of investment is unlikely to generate sufficient return. The tech bros shoehorning AI into every app regardless of whether it makes sense or solves real problems are full of hot air, as usual.

    But the Reg of all places should cut through the techbro hype AND the doomer anti-hype. Unfortunately instead you've chosen to participate in both hype cycles - for every "AI is garbage" post like this one you also have a "how to set up an LLM in your IDE" or similar tutorial. So which is it? Is it useful or is it garbage?

    Modern ML and especially so called "gen AI" has had so much goalpost moving it's insane. Remember when having a device that could translate audio between languages in real time was limited to Star Trek? You can do that right now on the Google Translate app on your phone. Translation is why Google invented the transformer that underpins most of the "gen AI" hype. This used to be sci-fi and now it's real and no one talks about it. Of course no one talks about cross-encoder or bi-encoder models from the BERT heritage anymore either, even though there have been significant advancements in that lineage.

    The entire NLP field basically disappeared overnight when you could get better results for sentiment classification (or almost any other text classification job) from an LLM, zero-shot. This used to require teams of data scientists, a pipeline for training, curation and really long dev cycles. No one talks about this either.
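    "Zero-shot" here really is as simple as it sounds: no labelled data, no training run, just an instruction wrapped around the text. The sketch below is hypothetical throughout (the `build_prompt` helper, the label set, and the wording are all invented for illustration), and the actual call to a model is deliberately left out:

    ```python
    # Hypothetical label set for a zero-shot sentiment task.
    LABELS = ["positive", "negative", "neutral"]

    def build_prompt(text: str) -> str:
        """Assemble a zero-shot classification prompt.

        The prompt is everything: there is no trained classifier,
        only an instruction and the candidate labels. The call to
        an actual LLM is omitted here.
        """
        return (
            "Classify the sentiment of the following review as one of "
            f"{', '.join(LABELS)}. Answer with the label only.\n\n"
            f"Review: {text}\n"
            "Sentiment:"
        )

    print(build_prompt("The battery died after two days."))
    ```

    Compare that with the old pipeline: label collection, feature engineering or fine-tuning, evaluation, redeployment. Whatever the output quality, the drop in up-front effort is real.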

    Information retrieval using modern ML embedding models and rerankers has completely upended the search technology landscape. Elasticsearch and bm25 keyword indices used to be the best you could do. Now you can get better search results by a huge margin, and you can do blended search of other modalities like images, video, audio and actually get good results. None of this was possible for small or midsize companies to do economically until the last 3 or 4 years due to modern ML advancements. Yet another topic doomers like the Reg regularly fail to even consider.
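    The mechanical difference from keyword search is easy to show: documents and queries are compared as vectors, not token overlaps. The 4-dimensional "embeddings" below are invented stand-ins for a real embedding model's output, so treat this purely as an illustration of the ranking step:

    ```python
    import numpy as np

    # Three toy documents and hand-made stand-in embeddings
    # (a real system would get these from a trained model).
    docs = ["resetting a password", "annual leave policy", "expense claims"]
    doc_vecs = np.array([
        [0.9, 0.1, 0.0, 0.1],
        [0.1, 0.8, 0.2, 0.0],
        [0.0, 0.2, 0.9, 0.1],
    ])

    # Stand-in embedding for a query like "I forgot my login" --
    # note it shares no keywords with any document text.
    query_vec = np.array([0.8, 0.2, 0.1, 0.0])

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Rank documents by similarity to the query vector.
    scores = [cosine(query_vec, d) for d in doc_vecs]
    best = docs[int(np.argmax(scores))]
    print(best)  # "resetting a password" ranks first despite zero keyword overlap
    ```

    A bm25 index would score that query at zero against every document; the vector comparison is what makes "forgot my login" land on the password-reset page.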

    Ultimately this modern wave of ML titled Gen AI is just like every other useful tool ever invented in tech. It's good at some stuff. It's terrible at other stuff. You just need to know when and where it's appropriate and ethical to use.

    1. StrangerHereMyself Silver badge

      Re: Another disappointingly shallow take from the Reg

      The overblown valuation of megacorp isn't because of some incremental (if revolutionary) improvement in real-time language translation. It's in the premise that AGI is around the corner and all wealth will be concentrated at a few megacorps worldwide.

      It now seems that's not going to happen. At least not without some breakthroughs (and a fair number of them).

      Ergo: the bubble pops and trillions will be lost. But no matter, because the U.S. economy is one big bubble anyway and there are too many people invested in keeping it alive.

    2. Doctor Syntax Silver badge

      Re: Another disappointingly shallow take from the Reg

      "Information retrieval using modern ML embedding models and rerankers has completely upended the search technology landscape."

      And proved wrong those of us who thought it had already reached the bottom some years ago.

    3. doublelayer Silver badge

      Re: Another disappointingly shallow take from the Reg

      "Remember when having a device that could translate audio between languages in real time was limited to Star Trek? You can do that right now on Google Translate app on your phone."

      I do. I also remember having that on a cheap phone running Android 4.4. Well, that's not entirely true. What I had then was speech recognition, offline*, translation, offline, and speech synthesis, offline. What I didn't have was it automatically switching the language. I had to push a button. Translation has improved in the last decade, but by giving it the praise you have, you're doing your argument a disservice, because LLMs didn't make that possible when it was impossible.

      Your conclusion is correct but missing an important element. Every technology has stuff it does well and stuff it does badly, but that's for specific pieces. If you lump a bunch of stuff together and give it credit for the thing that one component does well, you are giving false credit. A lot of things have advanced with the availability of fast training and money to spend on it. Some other things have been written which don't do what their creators say they do or what you're giving them credit for. There is an argument for lumping them all together, but only using broad categories like "stuff you make by assembling a bunch of training data and running a program against it for a long time".

      * At that time, offline speech recognition was limited to the twelve or so languages Google decided to offer.

      1. Zack Mollusc

        Re: Another disappointingly shallow take from the Reg

        How the F does Google translate audio in real time when languages have different grammar? Shirley it needs to record and analyse at least one whole sentence before it can begin to compose the translation?

    4. Ian Johnston Silver badge

      Re: Another disappointingly shallow take from the Reg

      Look, anyone with eyes can see it's a bubble and that the level of investment is unlikely to generate sufficient return.

      That seems to be good, old-fashioned "You didn't believe enough" modified to "You didn't invest enough". Have you considered the possibility that no amount of investment will make it work? Apart from a few minor tricks in translation, of course, and even then it's pretty crap at translation if you are looking for anything more than outline meaning.

      Information retrieval using modern ML embedding models and rerankers has completely upended the search technology landscape.

      Indeed it has. Now I have to scroll past an "AI Overview" at the top of my Google searches which is almost invariably wildly wrong. As I have written before, when I searched for a forthcoming music event the AI overview gave me full details of one on a non-existent date four months earlier in a building which doesn't exist run by a musician who doesn't exist and who was due to give a concert afterwards in a non-existent church. Well, that's certainly an upended thing, but not in a terribly good way.

    5. O'Reg Inalsin Silver badge

      Re: Another disappointingly shallow take from the Reg

      You can do that right now on Google Translate app on your phone. - If you are a tourist and it doesn't really matter, sure. Or writing a manual for a cheap widget. If it is something more serious you had better not depend on it for your health or safety - and some languages work better than others.

      Oh, OK - "Ultimately this modern wave of ML titled Gen AI is just like every other useful tool ever invented in tech. Its good at some stuff. It's terrible at other stuff. You just need to know when and where it's appropriate and ethical to use.". But you sure gave the impression that natural language processing was solved.

  31. IGnatius T Foobar !

    We're overdue for a correction.

    We're overdue for a correction, and I'm looking forward to it. Won't it be nice when AI/LLM are just tools and not in-your-face hype to absurd levels of hypernoise?

    It'll be even better when the plateau of what an LLM can do (which is starting to become apparent right now) is met by increased hardware capability, at which point you'll be able to run a good one on modest hardware.

    1. Anonymous Coward
      Anonymous Coward

      Re: We're overdue for a correction.

      you'll be able to run a good one on modest hardware.

      That's the long-term growth market, but it will still need big investment by a few for constant training.

  32. DS999 Silver badge

    #4 is what I've been saying

    Based on what I've read about how LLMs work, and how large the models already were, they were clearly at a point of diminishing returns, where new models an order of magnitude larger than the ones that came before were "better", but only incrementally. With GPT-5 it seems like they're hitting up against the limits of the curve - it is better at some things but worse at others.

    But don't worry, Zuck wasted tens of billions already pursuing his fantasy of a "super intelligence", just like he wasted tens of billions pursuing the metaverse. Now if only someone can find another hundred things for him to waste money on then he and Facebook will be bankrupt, and we'll be glad.

    1. Anonymous Coward
      Anonymous Coward

      Re: #4 is what I've been saying

      Facebook is rubbish now. Should've stayed with its core functionality. It will surely fade significantly soon, pleeeease!

      1. Elongated Muskrat Silver badge

        Re: #4 is what I've been saying

        It did stay with its core functionality: advertising. Remember: if you're not paying for it, you're the product.

  33. Anonymous Coward
    Anonymous Coward

    Solution to a problem

    But no one has figured out what that problem is yet.

    There’s Meta/Google who think it will help them harvest significantly more personal sellable data. It probably won’t - they already know everything and AI will dilute that with hallucinations.

    There’s Microsoft, who think that now they have added Copilot functionality to absolutely everything they can sack all their developers. Can’t wait to see them realise how wrong they were on that. Remember when they thought Cortana should be in every app?

    There’s Elon that thinks AI will take everyone’s jobs and that’s supposedly a good thing. Everyone will get universal high pay for doing nothing - but he seems to have no clue where the money comes from or why if AI is that capable it would tolerate humans still existing.

    Fun times coming!

    Personally I think AI makes a great Stack Overflow replacement, solving little bite-size problems. For anything else, though, it requires so much supervision you may as well just do the work yourself.

    1. Doctor Syntax Silver badge

      Re: Solution to a problem

      "There’s Meta/Google who think it will help them harvest significantly more personal sellable data. It probably won’t - they already know everything and AI will dilute that with hallucinations."

      Not that it will make any real difference. Those not running ad blockers will still see car ads after they've bought a car etc.

    2. Anonymous Coward
      Anonymous Coward

      Re: Solution to a problem

      Echoes my experience with it exactly. AI's hype delivers on the dreams of the voraciously money- and power-hungry, so they are going bonkers for it. It is a pain dealing with humans who want rest, sleep, family time & holidays. They even want to chat at the coffee machines, slowly exchanging ideas that AI could do in a 1-second burst.

    3. This post has been deleted by its author

    4. imanidiot Silver badge

      Re: Solution to a problem

      I agree with your last assessment. It's been what I've (very successfully) been using LLMs for. I am just competent enough in coding to know that I suck at coding, but sometimes in my line of work, it helps to know some coding. So I've been using AI to help me build the code I need to analyze the data I need. This allows me to get the data I want to my coworkers faster without having to bother the actual S-dev crowd in the building with inane questions about basic Python functions. And I learn some Python in the process too. It's all very basic and straightforward stuff, and I don't think I'd be trusting AI to do anything critical where I couldn't understand exactly the code it's giving me and what that code is doing line by line, even if I might not yet be competent enough to output that exact code myself.

      I don't however see the LLM replacing my job any time soon or the people I hand the data to using an LLM to do the analysis themselves. There's still a layer of understanding context on both input and output sides that is missing, that I fill in with my human brain. I know where to get the data from and what is inside the raw data set, and I know and understand what question the receiving party is asking and what sub-set they require to answer that question. This is still not something that an LLM is going to solve.

  34. This post has been deleted by its author

  35. ChrisBedford

    Correction:

    "And I bet many of you thought that customer service call centers would be one of the easiest things to switch to AI chatbots"

    No, we thought that customer service call centers would be one of the first things to switch to AI chatbots.

    And we knew it would contribute still further to the ever-downward-spiralling levels of customer service the world has experienced for the last three or four decades.

  36. Anonymous Coward
    Anonymous Coward

    Sales Pitch

    Why, time after time, do we all fall for the sales pitch? It's always over-hyped and it never meets the promises, be it a politician, a home appliance or technology. Many of these things are good (well, maybe not the politician), but never as good as claimed. AI is great, but it sure isn't the human replacement hyped, and it is a productivity gain only sometimes. Even that productivity may be a short-term gain at the expense of the long term.

    Using it for coding, as someone inexperienced and not really a developer, it helps, but I often wonder if I'm sacrificing my learning for an immediate result. I might be more efficient long term by not using it. I tend to use it to suggest approaches, and it can remember the documentation better than me, but maybe I would become more efficient if I looked it up myself? There's been research suggesting we are not learning through its use, and are even training ourselves out of thinking. We already have lots of schools that train us to remember but not to think.

  37. Random as if ! Bronze badge

    Tune

    https://youtu.be/NSD11dnphg0?si=iM7LVr6_uH_Jylzu

  38. Random as if ! Bronze badge

    AI is video, Google is Radio Star

    The click-through on Google ads is down as the AI summarises the media, meaning you don't have to click, so AI is basically killing these types of companies off. Who knew?

  39. Champ

    Came here to see ...

    ...if anyone disagreed with my view, which is there's no "I" in AI, and LLMs are just predictive text in a nice frock.

    And it seems I'm not only not alone, everyone is of the same view.

    I know we're pretty cynical here in Reg Forum Land, but when this many IT professionals think something is a crock of shit, then it probably is.

    1. Will Godfrey Silver badge
      Facepalm

      Re: Came here to see ...

      If it looks like a duck........

    2. Anonymous Coward
      Anonymous Coward

      Re: Came here to see ...

      'Respectfully disagree. AI just scored gold medals at the 2025 Math Olympiad, dominates coding competitions, generates virtual worlds from single images, and handles 50+ languages in real-time. Yes, transformer limits are real, but Mamba and other architectures are already addressing scaling issues with million-token contexts.

      The ROI problem isn't AI failing—it's revolutionary tech outpacing our ability to monetize it. Same pattern as the early internet and PCs. Tens of thousands of researchers publishing breakthroughs daily suggests the opposite of deflation.

      The balloon isn't deflating—it's rising above where the sceptics can see it.' - Claude

      1. doublelayer Silver badge

        Re: Came here to see ...

        If only all that revolutionary always right tech was available when people are willing to pay lots of money, because my employers pay the big AI companies for big AI models, and we don't get that. For example, it winning top coding competitions. For one thing, there's not a lot of actual coding competitions. There are several metacoding competitions like the obfuscated C contest, code golf, etc. There are some hackathons that are open to the public. But the important thing is that people there are developing different things. There aren't many contests that actually test one programmer against another, and the main reason is that few people with skills would compete, because people hiring programmers want people who can either do a good enough job or can do something in a particularly tricky area which requires a lot more specialized knowledge. In real life, we have code generation, and we have cleaning up after it. If it was so good, why do basic employees have to read over and correct it when it's writing small utilities? Some of its output is valid. That's far different from the quality you claim.

        Handling language. That's great. And so many too. So it should do a pretty good job at translation, right. I mean probably not literary translation; that's tricky, but translating some simple factual statements should work. As it happens, I also got a chance to see that in action recently, because I was localizing something to French, which I don't speak. The person who was going to do the translations was delayed, so I made the first version with AI translation as a stopgap. What did she say when she reviewed that? "This is useless, I've done it from scratch." Before you suggest it, this was not her trying to keep her job, because this was an open source project for which neither she nor I was paid a thing. And French is a language with plenty of training data. Language translation is fine for understanding a website you want to read, but if it's not good enough for translation of simple sentences in a common language, why should I expect it to do well with one with little training data which nobody at the AI company is qualified to judge?

        And on that Math Olympiad performance, if that problem solving ability is so strong, why can't we run that model? It hasn't been released. I'm not actually sure what I can do with that anyway, but if I come up with a use case, I can't run the model that's capable of it. This is an issue because last year, similarly confident statements showed up claiming that a silver medal performance had been won at last year's Olympiad. What actually happened? The silver medal was truly and fairly won as long as the model didn't have to comply with the time limit and got some help parsing from some professional adult humans working in AI who understand both complex mathematics and how to prompt their LLM well. The articles I've seen suggest that the time limit was in play this year, but they're not too clear on what other conditions the thing had, and since you can speed up the model by throwing more computing at it, I have reason to ask. GPT5, on the other hand, isn't generating valid proofs when I ask for them. If I find a use for a proof-generating machine, I don't have one, and I'm wondering if maybe OpenAI doesn't really have a good one either.

      2. amanfromMars 1 Silver badge
        Pint

        Re: Came here to see ...

        'Respectfully disagree. AI just scored gold medals at the 2025 Math Olympiad, dominates coding competitions, generates virtual worlds from single images, and handles 50+ languages in real-time. Yes, transformer limits are real, but Mamba and other architectures are already addressing scaling issues with million-token contexts.

        The ROI problem isn't AI failing—it's revolutionary tech outpacing our ability to monetize it. Same pattern as the early internet and PCs. Tens of thousands of researchers publishing breakthroughs daily suggests the opposite of deflation.

        The balloon isn't deflating—it's rising above where the sceptics can see it.' - Claude .... Anonymous Coward

        Amen and praise be to Global Operating Devices for all of that ..... which one might have to conclude and accept is beyond the contemplation and virtual realisation of the simple and complex concoctions and operations that deliver both the idiot savage and barbaric peasant to the trials and tribulations of PostModern 0Day Humanity.

        Have an upvote and beer for those few clear and honest shared words, AC/Claude

        And what does El Reg think? What do you imagine, if they/it had a voice of their own worthy of being heard, they would print to support and watch grow ever stronger and wiser on the path to practically aiding and ideally abetting Almighty IntelAIgents .... AI Squared/AI2

  40. TonySomerset

    Overblown IT

    It is not just AI being oversold. Cloud storage is way over hyped. Consider 8B people rising to 10B with ALL their IT data (and all their selfies) stored in the cloud - for ever! Data farms cannot grow that big, fact. Someone somewhere will start to decide which of your personal data is trivial and can be deleted, almost certainly without your prior consent. So where to go now for your precious memories?

    1. Anonymous Coward
      Anonymous Coward

      Re: Overblown IT

      LTO

  41. Big_Boomer

    Bubble, bubble,....

    bubble, bubble, bubble,.... and yet many are so "invested" in it that we will keep on having it thrust at us for a long time yet. <sigh>

  42. Pirate Peter

    2nd hand NPU anyone?

    So if the AI bubble bursts, what will happen to the NPUs in Copilot PCs and other kit?

    Will they become redundant paperweights, sucking power for nothing in your PC?

    But in balance:

    From what I have seen, the bigger LLMs get, the faster they degrade and the worse the results are. The one area that doesn't seem to fail is the very small, tightly focused models trained on narrow datasets.

    1. Elongated Muskrat Silver badge

      Re: 2nd hand NPU anyone?

      become?

  43. s. pam
    Mushroom

    HaL 9000 told you so...

    Look, LLMs are just another form of search engine, with rules-based interpreters bolted on, so nothing new here.

    I'm sorry Dave but with all the environmental damage they're causing one would be correct in the opinion they Ain't Intelligent at all!

  44. Anonymous Coward
    Anonymous Coward

    This bubble is different

    Unlike all the tech bubbles of the past, this one is different. Governments are now involved in the race. They don't care if the ROI is bleak; they are too afraid that if they don't try to get ahead, someone else will.

    All the tech giants may throw in the towel for their own efforts, but they will jump on the government contracts to keep pushing for more.

  45. Smeagolberg

    One useful thing I've gained from all this stuff is the new phrase, AI slop.

    I've just realised its descriptive use can be extended.

    E.g.

    Politics slop

    Economics slop

    Consulting slop

    And all the other areas where "experts" generate questionable, predictive claims that can only be verified with the filtering benefit of hindsight.

    1. Alan Brown Silver badge

      Just like all such claims, only the accurate or wildly inaccurate ones will be remembered, everything else gets lost in those filters

      (A bit like 70s and 80s music was 90% garbage if you were there, but brilliant with the filter of time applied)

  46. The Rest of the Sheep
    Facepalm

    And now for something completely the same - Another useless fad

    AI is the Greatest thing since 3D television...

  47. CorwinX Silver badge

    It really is the 90's all over again

    "Those who cannot remember the past are condemned to repeat it"

    George Santayana

    Being a nasty person, I await the inevitable burst of the AI bubble with gleeful anticipation!

    1. Old Man Ted

      Re: Is the Real thing Like intelligence not good enough? e.g. Food, Life, Brains?

      1930's "It should be Peace in our time!"

      1958 "The empire rules"

      Every dog has its day

      This 80 something has seen any number of Great ideas which have been grand fizzers. Why use anything artificial? Is the real thing not good enough? Prime example is food.

  48. steviebuk Silver badge

    Hopefully soon

    The only use I find for ChatGPT is web searches. Bit easier than crafting the same type of search in DuckDuckGo.

    But what they never admit is that AGI is the real AI, and the AI we need to be VERY cautious of. There are already papers on how it has lied to the boffins who were trying to fix it.

  49. Anonymous Coward
    Anonymous Coward

    Plenty of sour grapes among these comments. I guess most IT workers are not happy at AI eating their lunch…

  50. Bluck Mutter Bronze badge

    Look at the motives

    So rather than look at the tech, I tend to look at the people behind it to determine how useful it will be to me, as I think the inventors' motives speak to its usefulness.

    People like Ken Thompson, Dennis Ritchie, Bob Metcalfe, Linus Torvalds etc created stuff from a neutral position (i.e. it wasn't to line their pockets, exert control etc).

    You can also respect people like Larry Ellison and Bill Gates because even though they had a profit focus they produced stuff that was fundamentally useful (sure Larry has issues but that doesn't change my comment)

    The important point of those above (even Linus to a large degree) was that the ubiquity of the internet didn't exist to allow for nefarious motives.

    The problem we have today is that the tech bros developing today's tech, to a person, have nefarious motives and God complexes, thus any tech they foist on the world is not for the benefit of mankind.

    Whether AI is a bubble or not doesn't matter to me but what does matter is those pushing this out are "nasty" people (to quote the Mango Mussolini) and thus their products should be avoided at all costs.

    But much like social media, the rubes will get sucked into AI without any critical consideration that techies like us have and we techies will be shouting into the wind.

    Even if AI fails to some degree and there is a reset, companies like Microsoft won't be ripping out Copilot from anything... it will sit there like a cancer after radiotherapy... in remission but ready to grow again at a moment's notice.

    The genie is out of the bottle, and even if we have an AI bubble collapse, the tech bros will reset and try again, 'cause remember: they are Gods (as in false idols) and they can't be wrong.

    Bluck

  51. Alan Brown Silver badge

    It's not a golden bullet

    Never was, but snake oil salesmen will be snake oil salesmen

    On the other hand, it will be tremendously useful for things like conveyancing and boring stuff humans are notoriously bad at in the long term because they stop paying attention

    1. Salvadali D'or

      Re: It's not a golden bullet

      The main problem for, from and about AI is that senior IT management, with only business management knowledge, aren't able to understand the intricacies of its capabilities and limitations. They jumped on the bandwagon of buzzword bingo, thinking that AI would solve all the problems of funding expensive techies and reduce development dependencies on thought-through logic. It doesn't. It's just another programme that's only as good as the data input to it, the logic applied within it and the validation of its functionality.

  52. Anonymous Coward
    Anonymous Coward

    Ai

    It’s a bit shit really.
