AI running out of juice despite Microsoft's hard squeezing

I am so sick and tired of AI hype. I'm not the only one. Yeah, with the release of ChatGPT-3.5 in November 2022, AI became a huge deal. Now, though, the AI revolution, once heralded as the next big leap for businesses worldwide, is facing a sobering reality check. Recent data reveals a marked slowdown in AI adoption. The …

  1. Jou (Mxyzptlk) Silver badge

    nobody who knows how it actually works is surprised

    1. AI: Picture recognition

    2. AI: Picture remix

    3. AI: Text remix

    It is all simply a pattern-recognition-remix thing. The AI part in the existing pattern recognition is more like that existing tech got renamed, a bit refined, and the computing power did the rest. It is still tech from the late '80s/early '90s.

    The most annoying thing about that hype bubble is that it overshadows the actually good neural network uses: optimization in engineering, cancer recognition and so on.

    1. IamAProton
      Trollface

      Re: nobody who knows how it actually works is surprised

      I see improvements in copilot (as a non-registered user): months ago, before making it swear to Dog, I had to work around it for a bit; now most of the time it never even bothers with the usual boilerplate "I see you are frustrated ...", it just basically asks me to teach it more "local expressions" ;-)

      1. Anonymous Coward
        Anonymous Coward

        Re: nobody who knows how it actually works is surprised

        Did you write this comment with AI?

    2. Decay

      Re: nobody who knows how it actually works is surprised

      About a decade ago IBM used Watson to ingest publications (or the medical equivalent of white papers) on cancer treatments, along with all the then-current research, and then had Watson review the medical notes of cancer patients in Japan, which has some of the best oncology people and hospitals in the world. Apparently Watson was able to offer lines of treatment to those patients that had never occurred to the oncologists, because a human with a day job could not possibly keep up with all the current research. Watson could read and "understand" a far greater swath of data, but importantly this was peer-reviewed data, stuff that had been fact-checked etc.

      Interesting use case; then IBM went and tried to release it early and fell flat on its face: https://www.statnews.com/2017/09/05/watson-ibm-cancer/

      But it does offer a good insight into one possible use case: flagging useful data hidden in a sea of data. As long as the sea of data is somewhat curated and validated, I could see potential use cases.

      1. Jou (Mxyzptlk) Silver badge

        Re: nobody who knows how it actually works is surprised

        And IBM took care to feed good data into the system instead of scraping all the best slurs, extremes and conspiracies from Reddit and 4chan/9chan.

        1. ecofeco Silver badge

          Re: nobody who knows how it actually works is surprised

          Yep, big difference right there.

        2. Pascal Monett Silver badge

          If I'm not mistaken, Watson was still of the "expert system" generation, and did not use pseudo-hallucinating-AI.

          The fact that Watson failed is because, well, it was IBM.

          1. captain veg Silver badge

            Fail

            How many ad impressions did it generate?

            -A.

        3. masterofninja

          Re: nobody who knows how it actually works is surprised

          And this is what it comes down to - garbage in, garbage out

          I'm very sceptical too about this "AI revolution" and all the crazy valuations of companies. I cannot even work out how they are justified. On a whim I decided to ask how long it will take OpenAI to make a profit. Perplexity gave some woolly, vague answer. A summary line from ChatGPT mentions OpenAI will have "... cumulative losses projected to reach $44 billion by 2028". Without massive increases in revenue it just doesn't work. I know European companies are worried about missing the AI train, but perhaps second-mover advantage is better, with lower costs à la DeepSeek. Or even better, when copyright is removed from everything in the US, people can just copy ChatGPT straight off.

          Going back to the point, Watson seems to have worked because its data input was clean. Where IBM went wrong was in not promoting it correctly, though perhaps that stands in contrast to the current generation of overhyping everything and creating bubbles. What we should be using the popular current AI models for is initial research, because some of their claims need further investigation - like how I was always told to use an encyclopaedia: initial research, then deep dive.

          That said, I would also be sceptical of an AI summary of the science literature, as a lot of it produced nowadays is garbage, or tainted with such statistical alchemy that the apparent results don't hold up.

      2. Georgski

        Re: nobody who knows how it actually works is surprised

        > But it does offer a good insight to one possible use case. Flagging useful data hidden in a sea of data.

        Categorisation problem. I have heard of a scientist using it like this. They had a standard question like "Does this paper discuss the impact of X on Y" (or whatever).

        A machine can run this across thousands of papers and winnow the whole set down to a readable pile. It won't get it perfectly right and you might miss an essential paper, but you'd probably miss it anyway as you have time to read only a fraction of them.

        You could see the same in legal discovery.
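
        A minimal sketch of that winnowing loop, in Python. The `answers_question` check here is a hypothetical stand-in for the real LLM call (a crude keyword match, so the sketch actually runs); the shape of the loop is the point, not the classifier.

        ```python
        def answers_question(abstract: str, topic_x: str, topic_y: str) -> bool:
            """Stand-in for asking 'Does this paper discuss the impact of X on Y?'.

            In practice this would be an LLM call; here it is a naive keyword
            check purely so the sketch is self-contained and runnable.
            """
            text = abstract.lower()
            return topic_x.lower() in text and topic_y.lower() in text

        def winnow(papers: dict[str, str], topic_x: str, topic_y: str) -> list[str]:
            """Run the same question over every paper; keep only flagged titles.

            Expect both false positives and false negatives -- the output is a
            readable pile for a human, not a definitive answer.
            """
            return [title for title, abstract in papers.items()
                    if answers_question(abstract, topic_x, topic_y)]

        # Toy corpus standing in for "thousands of papers".
        papers = {
            "Paper A": "We study the impact of caffeine on sleep quality in adults.",
            "Paper B": "A survey of deep learning methods for image segmentation.",
            "Paper C": "Caffeine intake and its relationship to sleep disruption.",
        }

        print(winnow(papers, "caffeine", "sleep"))  # ['Paper A', 'Paper C']
        ```

        Swap the keyword check for a yes/no LLM prompt and the same loop scales to thousands of abstracts, with the caveats above about missed papers.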

        1. m4r35n357 Silver badge

          Re: nobody who knows how it actually works is surprised

          But a system designed specifically with those functions in mind will be far more efficient.

      3. ZOS

        Re: nobody who knows how it actually works is surprised

        BBC Horizon: Now the chips are down... particularly at the 35 minute mark...

        https://archive.org/details/BBCHorizon19771978NowTheChipsAreDown

    3. mcswell

      Re: nobody who knows how it actually works is surprised

      " it shadows the actual good neural network usages... [such as] cancer recognition". A few years ago, someone made the claim that it was no use training radiologists, since their work would soon be taken over by AI. That of course has not happened. Are there places where AI is currently being routinely (not experimentally) used in cancer recognition?

      1. Jou (Mxyzptlk) Silver badge

        Re: nobody who knows how it actually works is surprised

        I can only speak for Germany: yes, there are places using trained neural networks for that purpose. But it is not widespread; it is seen as supplemental. It helps get masses of cases through. The adoption rate is rising since they got it down to smartphone level.

        Around 2019 the first bigger news spread about neural networks (at that time nobody said AI, or KI as the German abbreviation) and their support in cancer detection. And as for "someone made the claim": I never read about that. Nobody serious could have claimed that, and nobody serious would have believed such a claim.

  2. kmorwath

    I have a colleague who wants to spend over 40K in AI hardware...

    .... so he can get his own AI to summarize and query documents/manuals so he doesn't have to read them - something he is also paid for. He fails to understand that reading documentation often tells you things you didn't know were there, or that could be done, while querying just returns the things you look for. And if it does report wrong info, you might only discover that when it's too late. But hey, he's a sysadmin, so being lazy is part of his job.

    1. MatthewSt Silver badge

      Re: I have a colleague who wants to spend over 40K in AI hardware...

      There's a very fine line between laziness and efficiency...

      1. Anonymous Coward
        Anonymous Coward

        Re: I have a colleague who wants to spend over 40K in AI hardware...

        Laziness was definitely my superpower.

        From laziness flowed simplicity, low cost, actually working, being completed.

      2. CrazyOldCatMan Silver badge

        Re: I have a colleague who wants to spend over 40K in AI hardware...

        There's a very fine line between laziness and efficiency...

        In the past, when moving to a new workplace, especially one where I was the senior or sole techie, I spent the first 6 months analysing the systems and then the next month automating stuff as much as possible so that my job only took half the time. The rest I could spend staring at a green-screen terminal, browsing my little corner of Usenet.

        I called it "creative laziness"

    2. Pascal Monett Silver badge
      Windows

      Re: being lazy is part of his job

      I'm a programmer. Project manager, senior developer, 25 years of experience in my specific domain.

      And yes, I'm lazy as fuck. But that means that I'm going to test my code in every conceivable configuration to ensure that it does its job right.

      The lazy part comes after, when I just have to select the data set and push a button to get a report.

      But before I let that button go into production, you can be sure that I have worked my ass off to make sure that it will respond properly to every conceivable case.

      But yeah, I'm lazy as fuck.

      1. Steve Aubrey

        Re: being lazy is part of his job

        The three virtues of a programmer (from Larry Wall, via https://thethreevirtues.com/)

        Laziness - The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote so you don't have to answer so many questions about it.

        Impatience - The anger you feel when the computer is being lazy. This makes you write programs that don't just react to your needs, but actually anticipate them. Or at least pretend to.

        Hubris - The quality that makes you write (and maintain) programs that other people won't want to say bad things about.

    3. ComputerSays_noAbsolutelyNo Silver badge

      Re: I have a colleague who wants to spend over 40K in AI hardware...

      A lazy worker who knows his stuff is valuable.

      A dumb lazy worker, on the other hand, ...

      1. Alumoi Silver badge

        Re: I have a colleague who wants to spend over 40K in AI hardware...

        is manager material.

    4. Jou (Mxyzptlk) Silver badge

      Re: I have a colleague who wants to spend over 40K in AI hardware...

      I am a sysadmin, not using any AI crap, and every time I try it, it fails miserably. But of course, you pick that one example and generalize it across all. That is offensive, even by "German Directness" standards.

    5. Dimmer Silver badge

      Re: I have a colleague who wants to spend over 40K in AI hardware...

      Don’t know any lazy sysadmins. Know a lot of lazy users.

      Lazy sysadmins don’t last long due to the constant “feature” updates, intentional end of life and stuff just breaking on its own.

      Oh, and don’t forget the hackers and lazy users.

      1. Anonymous Coward
        Anonymous Coward

        Re: I have a colleague who wants to spend over 40K in AI hardware...

        I might define those as negligent.

        I'm lazy -- but if a security issue is announced that affects something that I manage, I'll be patching it ASAP -- or pushing the others who manage that piece to do so (AND following up with them on it). I don't want to have to deal with the fallout of that going bad -- I'm lazy... and I know that if they don't, or I don't, it will go badly.

  3. m4r35n357 Silver badge

    If only!

    The wheels remain on for the foreseeable future, unfortunately.

  4. cookiecutter

    What's the next boondoggle?

    Why does anyone listen to firms like gartner? How much money has been wasted by firms on AI that they'll never use or break even on.

    Gartner's bloody hype cycle is the biggest joke. I was actually at a tech event the other week and they were talking about how AI is at the top of the hype cycle, and then the next sentence was about how great AI is...

    Utter madness! There isn't a killer app. Copilot is worse than useless. Videos on AI search try to convince you that a 20-minute conversation with AI is somehow better than a single-line search.

    And now on git there's an app called sidekick for mac which seems to be heading towards a locally run AI tool, for free. So even that application isn't going to pay back the $trillion odd they want to spend on nvidia hardware.

    Bets on what the next gartner hype doodad will be?

    1. b0llchit Silver badge
      Facepalm

      Re: What's the next boondoggle?

      Next hype candidates:

      • Fusion power
      • Space mining
      • Carrying buckets of water to the ocean
      • Quantum computing
      • Single photon memory storage
      • Anti-gravity waves
      • Hype for the Hype
      • War without War
      • Computing computers
      • Humanoid robots

      Just to name a few.

      1. Steve Davies 3 Silver badge
        Big Brother

        Re: What's the next boondoggle?

        To that list, Elongated Muskrat will claim that they are all HIS and HIS alone because he is a Genius.

        1. Adair Silver badge

          Re: What's the next boondoggle?

          Actually, he's just a very naughty boy.

      2. cookiecutter

        Re: What's the next boondoggle?

        I believe in fusion, imagine how far we'd be if the money burnt on openai was spent on that..France is at 22 minutes of power.

        Listen to Ed Zitron's podcast on this stuff - the numbers are insane. You could literally fix worldwide poverty with the money OpenAI is looking to burn over the next couple of years.

        1. Alan Brown Silver badge

          Re: What's the next boondoggle?

          "France is at 22 minutes of power"

          Not quite. The plasma has been maintained for 22 minutes, that's not the same as being over unity end to end (It's still impressive but practical Fusion is still as far off as it's ever been and those neutrons are problematic)

        2. Adair Silver badge

          Re: What's the next boondoggle?

          'imagine how far we'd be if the money burnt on openai was spent on' the fusion reactor we've already got; the output of which is freely available (no licence required) to anyone—the Sun.

          1. AbominableCodeman
            Joke

            Re: What's the next boondoggle?

            "the output of which is freely available (no licence required) to anyone—the Sun"

            I'm sure someone, somewhere is working on fixing that.

            1. Lomax
              Boffin

              Re: What's the next boondoggle?

              In Sweden you have to pay a tax on the electricity produced by your PV installation if the total output power exceeds 500kW - even if you consume all the electricity generated yourself. So a factory hoping to harvest free energy from the sun to cover its own electricity needs will still have to pay for every kWh produced.

      3. breakfast Silver badge

        Re: What's the next boondoggle?

        You can actually see the hype-cycle trying to build up Quantum as the next big bandwagon, but it has the limitation that right now it doesn't exist at all. Even LLMs give the impression of doing something smart. It's going to be hard to shift those billions of speculative investment into pure vapourware. Though honestly OpenAI is pretty much selling hot air as well - nothing they have promised over the last cycle or so seems to have been delivered.

      4. ComputerSays_noAbsolutelyNo Silver badge
        Coat

        Re: What's the next boondoggle?

        I can carry buckets.

        -> let me fetch my bucket carrying diploma

      5. Hooda Thunkett

        Re: What's the next boondoggle?

        You forgot quantum computing. How could you forget quantum computing?

    2. Tron Silver badge

      Re: What's the next boondoggle?

      Climate change-based drought is going to be a real problem. If anyone wants to invest at an early stage, I have revolutionised the dehydrated water market. Last chance - don't miss out. Some stock is still available for those with a few million to spare. Chicken feed compared to the costs of investing in AI.

    3. thames Silver badge

      Re: What's the next boondoggle?

      Perhaps they could replace Gartner with an AI.

      1. Pascal Monett Silver badge
        Trollface

        Hasn't that been done already ?

    4. Adair Silver badge

      Re: What's the next boondoggle?

      Gartner - always reliably six months behind the wave that everyone has already got off, or never bothered getting onto.

    5. Potemkine! Silver badge

      Re: What's the next boondoggle?

      You mean blockchain wasn't the killer technology everybody needed? And AI isn't the revolution everybody should use? I'm flabbergasted.

      /s

  5. Anonymous Coward
    Anonymous Coward

    MS AI running out of steam?

    Why don't you hire Elon the Almighty? Then he'll fire 90% of your workforce and claim $100B per year in savings.

    Your SEC reports will look fantastic for 1 or 2 quarters then you will get hit with the realisation that AI is not the best thing since sliced bread. Don't worry, Elon can sell you some obedient robots. No more water cooler conversations.... 24 hours of work a day.

    The above text was written by a human. No AI in sight and is intended as a bit of sarcasm. Personally, I wish MS would fire itself. The world would be a much better place without them.

    1. Alan Brown Silver badge

      Re: MS AI running out of steam?

      "then you will get hit with the realisation that AI is not the best thing since sliced bread"

      and then 2-5 years later we'll start seeing quiet announcements of things being facilitated by AI

      It's a tool, not a universal panacea. It's already useful, but people want flying cars, so it's easy to sell hype for that. Remember: people once brushed their teeth with radium toothpaste because it was the new cure-all.

    2. captain veg Silver badge

      Re: MS AI running out of steam?

      I've never wondered before now, but what is the worst thing since sliced bread?

      I quite like unsliced bread.

      -A.

    3. Lomax

      Re: MS AI running out of steam?

      > I wish MS would fire itself...

      ...out of a cannon. Into the sun.

    4. big_D Silver badge
      Coat

      Re: MS AI running out of steam?

      Yes, but do people want AI to slice their bread?

      Oh, go stick it up your nose!

      1. collinsl Silver badge
        Trollface

        Re: MS AI running out of steam?

        No room, too many deckchairs

    5. CrazyOldCatMan Silver badge

      Re: MS AI running out of steam?

      not the best thing since sliced bread

      Some of us are not impressed with sliced bread - particularly that horrible plastic Chorleywood process stuff that masquerades as bread in most of the shops.

      1. Jou (Mxyzptlk) Silver badge

        Re: MS AI running out of steam?

        Come to Germany. My town has recently been highlighted as a "Bread-light-district" with 8 bakeries (excluding supermarkets and bakeries within supermarkets) within an area of 500 meters (one-third mile). I guess you are in the USA, where most bread is like a plastic sponge, but less healthy. Though you CAN get good bread over there, as some Germans report, but that is no simple task depending on where you live.

        1. Joseba4242

          Re: MS AI running out of steam?

          Love how the pun bread light district / Brotlichtviertel works in both English and German.

  6. Pulled Tea
    Holmes

    As much as I agree with the sentiment that AI is overhyped...

    ...I don't get why it's up to the businesses and workers to figure out what, exactly, the business use cases for AI are.

    Back in the day, when there was a revolutionary new technology, you'd have experts coming by and explaining to the enterprise exactly how and where the technology was supposed to be implemented if you wanted maximum returns.

    Just dumping the product in the middle of the office was like dumping an interpreter for a programming language on to the laps of an executive and expecting them to figure it out. Of course they wouldn't, and why should they? They're not the experts on this damn stuff, the supposed technology providers are.

    Like, aside from the fact that none of this stuff is reliable or actually good for the environment, of course dumping it on the laps of employees and expecting them to figure it out and experiment... first off, who's got the time to bloody “experiment”? Their department's budget got cut, their metrics have gone up, the labour market is shite, and their workforce has been cut 80% while pay has remained the same. And secondly... this doodad is meant to bloody replace them. Even if they've got the time, no one wants it, or at the very best no one's terribly enthusiastic about finding ways for the bosses to unlock more out of them.

    1. thames Silver badge

      Re: As much as I agree with the sentiment that AI is overhyped...

      Shouldn't the AI be able to figure out what the business use cases are for AI? After all, ingesting huge masses of information about a problem, digesting it, and spitting out simple responses based on what others have done is exactly what AI is supposed to excel at. In other words, shouldn't business consultancies be just AIs?

      If an AI can't do that, then perhaps AI isn't all that it has been cracked up to be.

      1. Fonant

        Re: As much as I agree with the sentiment that AI is overhyped...

        perhaps AI isn't all that it has been cracked up to be.

        Spoiler: it isn't. Nowhere near.

        1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: As much as I agree with the sentiment that AI is overhyped...

      Hmm, maybe because AI con men haven't figured it out yet how they can make big $$$ out of it?

    3. David Hicklin Silver badge

      Re: As much as I agree with the sentiment that AI is overhyped...

      I remember the early days of the IBM PC era, where you would identify "we need something to do X" and then go and look for something that would do "X" (or as close to it as possible).

      All I saw of AI at my last job before early retirement got me out of the rat race was "please find something we can use this nice shiny and expensive thing for" which is absolutely bonkers

  7. Tron Silver badge

    They gave the AI scam their best shot, but...

    quote: As Microsoft CEO Satya Nadella recently observed... 10 billion bucks.

    That's like the Pope admitting that they don't really know if all that God and Heaven stuff really exists or not after all, but they are going to keep running with it anyway.

    AI will have niche value and may support some interface options. The evangelical stuff - universal adoption, gamechanging tech - is a scam. An enthusiastic scam because the last few - the Metaverse, NFTs etc - didn't go so well.

    They want to repeat their success with the cloud/subscription/SaaS scam, so this time they are giving it their all and forcing us to pay for it even if we don't want it.

    We do still operate on a capitalist basis, and forcing us to buy stuff we don't want, that doesn't work, costs us more than it makes and even has a misrepresentational name - it isn't really intelligent - was never going to go well.

    MS have spent years polluting their own ecosystem with duff updates, the withdrawal of options, endless restrictions, terrible OS versions, and the misery of subscription services. Forcing AI on users the way they tried to force W10 downloads on us, may well be the straw that breaks the camel's back. No matter how much of a pain in the arse Linux and the Linux community are, there are just too many reasons now to avoid MS.

    1. navarac Silver badge

      Re: They gave the AI scam their best shot, but...

      >> Forcing AI on users the way they tried to force W10 downloads on us, may well be the straw that breaks the camel's back. No matter how much of a pain in the arse Linux and the Linux community are, there are just too many reasons now to avoid MS. <<

      This camel's back broke at the start of 2020. I may be a pain in the arse, but this Linux user just gets on with stuff, and does not have to put up with Microshit. I do, however, chuckle at the crap Microsoft still throws at users.

      So sorry for you lot that just HAVE to stay in the Windows eco-system. You are being shafted, and you know it.

    2. captain veg Silver badge

      Re: They gave the AI scam their best shot, but...

      You have to wonder if this isn't some humungous psychological experiment. Just how far can we drive them before they bite back?

      -A.

      1. Alumoi Silver badge

        Re: They gave the AI scam their best shot, but...

        Naah, that's politics.

    3. Anonymous Coward
      Anonymous Coward

      Re: They gave the AI scam their best shot, but...

      Look at it from the "corporate leadership" perspective.

      "AI" is now real - and clearly passes the Turing test -- the measurement bar for AI. (It's not, and it's not, and it doesn't, but alas, it looks like it does to the ill-informed.)

      You have the possibility of computers learning everything that your workers are doing, and being able to do that mindless, repetitive "stuff" without any programmers required to figure out and code that stuff. You can replace expensive, distractable, error-prone people that make mistakes and have sick days and need offices and ..... with computers that will do it repetitively, repeatably, intangibly, for a pittance of an investment-per-unit-thing.

      As a corporate leader, if you ignore such an advertisement - you're a flat out dumbass. These companies *can't* ignore this, it would be negligent on their part.

      Consider what they know: the above advertisement, with "new technology," "still developing," "shows promise," "Look at the grammatically correct sentences that are generated," "hallucinated - but we've worked those out now," -- all of the advancements in a short period of the technology. When you *don't* understand how it works, and *haven't* tried to put it to use - it *seems* incredible (with credibility). This means that you can cut your labor cost by 2/3, which is the vast majority of your corporate expense, and replace it with a few tens of thousands of dollars of perhaps one-time purchases -- becoming more responsive to the market, more able to adopt new technologies, more creative in workflows, and you won't need any of these *super* expensive technical-type employees at all: just tell an AI what to do and let it do it.

      It's just too attractive of an opportunity to pass up. The potential gains are nothing short of radical, and the losses... well, everyone's doing it: you're justified in whatever you invest.

      That's what all of this corporate investment is: not understanding the technology, but seeing its advancement. Despite the nay-sayers -- who, really, are employees whose jobs are threatened (corporate leadership perspective) -- its promise is clearly visible when you sit down with it for a few minutes. (IT perspective: and not for more than a few minutes.) Add to that that the huge tech cos are all investing in it to bring it to pass, and how can it *not* become a booming solution to all the problems of corporate greed?

      You would really have to have some idea of how this works to dismiss it the way the technologists are. C-level execs don't have that.

      1. Anonymous Coward
        Anonymous Coward

        Re: They gave the AI scam their best shot, but...

        I suspect a lot of it is because a recent LLM can indeed do quite a lot of what the top-level management actually do each day. Summarising reports (incorrectly) and writing fluffy text is sufficient quite a lot of the time, because at that level it often doesn't matter whether it's true - vibes are both necessary and sufficient.

        So having seen that, a fair few of them assume that because it did what the CEO wanted yesterday, it'll still do that next year (it won't), and that it can do what the actual workers do.

        Which it cannot, because their tasks need to be performed accurately and with precision - vague statements and 'vision' are not enough. Hallucinations at the coalface kill people.

        1. Tron Silver badge

          Re: They gave the AI scam their best shot, but...

          Good point. AI may be able to replace management and politicians without anyone noticing.

          Of course that doesn't mean it is intelligent, capable or should be trusted with anything important.

          It cannot replace people with real skills and expertise or anyone who actually does any work.

          1. DrkShadow

            Re: They gave the AI scam their best shot, but...

            > Of course that doesn't mean it is intelligent, capable or should be trusted with anything important.

            The management, the politicians, or the AI?...

            1. Anonymous Coward
              Anonymous Coward

              Re: They gave the AI scam their best shot, but...

              Yes

      2. Justthefacts Silver badge

        Re: They gave the AI scam their best shot, but...

        It *does* pass the Turing test, rather easily. But it turns out that the metric of “how good are you at giving a plausible but incorrect impression” is not a good one…..

        1. Andrew Scott Bronze badge

          Re: They gave the AI scam their best shot, but...

          Maybe that's the problem: being indistinguishable from a human at the other end of a text conversation doesn't require the truth of factual information to be relayed. Ask an LLM what it had for breakfast, and if it tells the truth you may have an answer to whether the respondent is human or not, assuming it doesn't lie. Other questions might get you closer to the "truth". If an LLM can pass the Turing test for all possible questions, then it's been trained, or required, to lie.

    4. Lomax

      Re: They gave the AI scam their best shot, but...

      I quite liked Windows 7. Wouldn't mind using it still if MS kept it patched. The Win 8 EULA, and the atrocious Metro UI, pushed me to switch to Linux. Never looked back. Today I only run Windows as a VM, for testing purposes.

  8. Doctor Syntax Silver badge

    "Yes, it can be helpful when used carefully as a tool"

    Exactly what is that use for which it could be a helpful tool?

    1. captain veg Silver badge

      The word "tool" has various interpretations.

      -A.

  9. Bebu sa Ware
    Coat

    "I'll wake you up when we start climbing the Slope of Enlightenment."

    Much appreciated although I shouldn't imagine that would be much before the Last Trump.

  10. Locomotion69 Bronze badge
    Coat

    There is no killer app. All these companies are so desperate to deploy AI on ...whatever... that the actual problem to be solved appears to have been lost.

    So the obvious AI response must be 42. Now let us aim for the question.

    1. captain veg Silver badge

      Let us aim for the question

      It is the great question, of life, the universe and everything.

      -A.

  11. Fonant

    LLM "AI" is simply bullshit generation

    "AI" is very useful for everyone who needs some plausible (might not be true) text or images or video.

    "AI" is completely useless for anyone who wants outputs to be accurate and/or true.

    "ChatGPT iS Bullshit": https://link.springer.com/article/10.1007/s10676-024-09775-5

  12. AndrewTR

    "...We have a perfect ouroboros of AI-driven pointless work since DOGE is believed to use AI to read these messages."

    Thank you for this. It made my day.

    1. HuBo Silver badge
      Alien

      Me too!

      Two things to add: 1) the paper underlying "the bigger, 'better' LLMs tend to be the ones that deliver the worst answers" (linked under "Nature study") is open access at Larger and more instructable language models become less reliable -- where we see issues with so-called AI scaling laws.

      2) AI girlfriends (linked under "AI sexting") are lovely! With "an impressive surge in investments" valued at just under $10B by 2028. It could well be the AI killer app, especially if one's AI girlfriend is also an AI serial killer! (or maybe a suicide enticer ... unfortunately way too real)

      It seems overall that AI may be to intelligence as Soylent is to gastronomy, "a bleak message [...] that exists because people [...] have been failed so badly by their education system". Juicing it dry won't solve the root problem ...

      1. Richard 12 Silver badge

        Porn usually is

        That and gambling generally do very well with new technology.

        1. Decay

          Re: Porn usually is

          Agreed, people forget how much advancement was driven by porn, gambling and finance. Now there's a Venn diagram with a lot of overlap :) Porn drove micropayments and increased bandwidth long before streaming or Napster were even a thought bubble. Finance tended to be more enterprise level, think Bloomberg dark fiber, millisecond trading etc. Porn got the micropayments world going and set up a lot of the frameworks between personal online credit card purchases and banking; gambling followed suit, but with a few more zeros in the budget to get it across the line and legitimized. Banks were strangely reluctant to be involved in porn-related activities but were fine with gambling, go figure.

  13. Rich 2 Silver badge

    AI isn’t

    The problem with “AI” (No such thing) is that the current crop of hype is actually all about LLMs.

    LLMs have nothing at all to do with AI. They are basically clever search algorithms that can stitch together stuff that they find. That’s all they do.

    That in itself is reasonably clever but there are limited applications for this - which is the chicken that is coming home to roost as more and more people are realising this.

    What is being sold is in no way AI. It is a search algorithm. It is being mis-sold and people are quickly realising this when they find that it can’t actually do anything that one may expect of a true AI system.

    I am very pleased the bubble is starting to burst now rather than in a few years time.

    1. sarusa Silver badge

      Re: AI isn’t

      This at least (?) is a decades-old problem. Any program that uses if statements is technically 'Artificial Intelligence' in the most reductive definition - if I write 'if ( input_a == 1 ) { do_this(); } else { do_that(); }' it is now making a decision and artificially simulating intelligence, so I could now sell it for $1B as AI. And this has been happily abused to sell things since at least the 1950s. Someone wrote a checkers-playing program in 1951 and that was 'Artificial Intelligence'! There have been lots of commercial 'Artificial Intelligence' packages that were going to replace programmers This Time For Sure.

      Which is why we have the term AGI (Artificial General Intelligence) to mean 'no this time it's really thinking and self aware we swear' and I'm sure that won't be abused for marketing *koff*.

      But when dumbass consumers (and managers) see 'AI' they think it's 'AGI' so it does help to keep reminding them it's not.

    2. David Hicklin Silver badge

      Re: AI isn’t

      But marketing "LLM" is not as sexy as marketing "AI"

    3. veti Silver badge

      Re: AI isn’t

      Meh. I'm still waiting for someone to explain how, if you exclude all the input from the physical body, human thought is any different from an LLM.

      1. Dan 55 Silver badge

        Re: AI isn’t

        Would human thought pause while waiting for input from a prompt?

      2. sarusa Silver badge

        Re: AI isn’t

        > Meh. I'm still waiting for someone to explain how, if you exclude all the input from the physical body, human thought is any different from an LLM.

        Well for one thing it's a lot more random. LLMs are extremely predictable if you start with the same random seed for choosing the next token.
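
        That determinism is easy to demonstrate with a toy sampler (a sketch, not a real LLM -- the vocabulary and "model" here are invented; a real LLM samples from learned next-token probabilities, but the seeding behaviour is the same):

```python
import random

# Toy "next-token" generator: with a fixed seed, the output sequence is
# identical on every run, illustrating how predictable seeded sampling is.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def generate(seed: int, length: int = 8) -> list[str]:
    rng = random.Random(seed)  # fixed seed => deterministic token choices
    return [rng.choice(VOCAB) for _ in range(length)]

# Same seed, same output, every time:
assert generate(42) == generate(42)
```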

        And then the human brain has far more connections operating simultaneously and there seem to be some quantum effects going on. Nobody really understands how it works.

        An LLM is basically a sad-ass emulation of a brain built from the little we understand. It's certainly a giant step up from a simple neural network, but at best it can produce similar results to the real thing while using a trillion times more energy and 1000x more time.

        Aaaand, a human brain can be self-aware. An LLM can't. Ever. It's just a stochastic parrot where all the secret sauce is in the training. If you want AGI you will need some other tech. I know, I know, keep trying to prove that wrong.

    4. Lomax

      Re: AI isn’t

      Someone (here?) referred to them as "stochastic parrots" - an expression that stuck with me.

  14. tekHedd

    Working As Designed

    "AI continues to deliver plausible, but wrong, answers to questions."

    Working as designed. #wontfix

    ALL output from Generative AI is hallucination. "AI accuracy" is (human) confirmation bias. (Confirmation bias often makes this mistake where probabilities are involved; see also gambling.)

    ^ Anyone who says otherwise is scamming you.

    1. veti Silver badge

      Re: Working As Designed

      I can ask Perplexity "is (local cafe) open right now?", and it will tell me. Correctly.

      I can ask it "what is the largest eagle", and it gives me a decent answer - four choices, depending how you want to define "largest" (and "is") - set out in a format I can clearly read within a few seconds.

      This is not confirmation bias, this is simply asking questions that the system knows how to answer.

  15. Henry Wertz 1 Gold badge

    My experience with AI

    So, specialized models used for specific purposes can be good. AI isn't TOTALLY useless. But, it's been vastly overhyped. In terms of just sticking some LLM somewhere to use, here's a sample of my experience:

    I called Verizon. The AI thing they now have replacing their conventional voice-prompt system, instead of answering the question you ask, just tells you where on the website you can find the information. It usually gives you info on the wrong question (instead of just admitting it doesn't have info on the topic). And it resists handing your call to a human much more than the previous system did.

    I took a photo of some eggs in water in New Orleans and asked Gemini to identify them. It thought they were some kind of fish eggs, but asked for a location to narrow it down. I told it New Orleans. It said they were mosquito eggs... reasoning? All eggs in New Orleans are mosquito eggs.

    A while back I did have a local copy of DeepSeek (quantized, I don't have 600GB RAM in my desktop LOL) write a bit of code. It was OK I guess. Although I've also seen it make non-running code often enough that I would NOT have these tools just spit something out and use it.

    I also started grilling DeepSeek about what restrictions it had, and I finally had it say it had restrictions but was unable to list them. I asked it if it was unable to list them because there was a rule saying it couldn't list them, or if it was unable because the rules were implemented in a way that left it unaware of what they were. It had a full-blown existential crisis, burned through 20 minutes of think time (my system doesn't run this model TOO quickly, probably a paragraph a minute.. but still) printing out paragraph after paragraph on the nature of awareness. Finally it crashed -- I don't know if it crashed crashed, or if LM Studio just has a timeout (assumed it was in an infinite loop?) and dropped the hammer on it. I didn't read through this thing to see how coherent it was, but I did skim it to see that it didn't start looping or repeating itself, and it didn't devolve into spitting out word salad; it was still in the middle of a regular English sentence when it crashed.

    Recently, I wanted to know if the engine in my car was cast iron, aluminum, or iron block/aluminum head. I mistakenly asked if the Cruze 1.4L turbo engine was steel or aluminum (I put steel instead of cast iron), and Google's AI response went on about how it had a steel engine block... definitely wrong, it's either going to be cast iron or aluminum. Then when I asked cast iron or aluminum, it assured me it was aluminum. The real answer is that the one I have has a cast iron block and an aluminum head. Confusingly, they did switch to a different 1.4L engine later in this vehicle's life, but Google didn't mention that, or specify which 1.4L engine its info was for.

    The AI summaries Google gives, I quit even looking at them, because probably a solid 30-40% of them are wrong even in the first sentence, and others get important details wrong in the next couple of sentences they spit out.

    Separate from this, I also played with Stable Diffusion and watched it make creepy-as-all-hell uncanny-valley renderings of whatever, which were definite AI slop. A couple of friends were over, so we just gave it prompts and laughed at the results, basically. I'm sure diffusion can work nicely, but tread lightly; it can also be quite bad.

    I did play with using some LLMs for sentiment analysis of text documents -- some models didn't give a consistent score (on a 1 to 10 scale, scoring the same document a 5 one time, a 9 the next, a 7 the next...) but some did. Pattern recognition, pattern matching, data analysis: models can be quite good at these. Just to say it's not like they're totally useless; there are things they are OK to very good at.
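
    One way to quantify that kind of score drift is to ask the model several times and check the spread (a hypothetical helper, not any particular API -- `parse_score` assumes the model's reply contains a bare 1-10 number somewhere in the text):

```python
import re
from statistics import pstdev

def parse_score(reply: str) -> int:
    """Pull a 1-10 sentiment score out of a free-text model reply."""
    m = re.search(r"\b([1-9]|10)\b", reply)
    if not m:
        raise ValueError(f"no 1-10 score found in: {reply!r}")
    return int(m.group(1))

def is_consistent(scores: list[int], max_stdev: float = 1.0) -> bool:
    """Flag models whose repeated scores of the same document vary too much."""
    return pstdev(scores) <= max_stdev

# The commenter's inconsistent model (5, 9, 7 for the same document) fails:
assert not is_consistent([5, 9, 7])
# A model that scores 7, 7, 8 passes:
assert is_consistent([7, 7, 8])
```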

    --------------

    So, yeah, no kidding I'm not going to rush out to implement AI. Now that people can see Siri, Google Gemini (on its own), and Google's AI summaries, and how pants they are, I imagine that might temper people's enthusiasm for just dropping in AI wherever.

    To me the most important feature or fix would be for LLMs to realize when they don't know the answer to something and not hallucinate (if being used for answering factual questions; of course, if you are asking it to be creative, then I suppose "no hallucination" might block that creativity). But, really, I just don't expect a model, no matter how good it is, to be an expert on everything; it is not a replacement for the expertise of literally everyone on the planet (especially if you go to those more obscure "rabbit hole" topics, like some people nerding on about a specific game, or old media, or computers of the 1950s, or whatever more specialized topic).

    For instance, Carl Claunch has been refurbishing an IBM 1130 (up to and including reimplementing systems for long-term reliability; he has disk packs and a drive, but how reliable will 50-70 year old disk packs be? So he's implemented compatible peripherals using Raspberry Pis and FPGAs). Cool to read about, and specialized, but I seriously doubt an AI can have that kind of knowledge. I imagine it'd get some info right, then assume things work how they did on post-1950s computers, and make laughable errors. The idea that AI answers could entirely replace a search engine for answering questions is laughable.

    1. Richard 12 Silver badge
      Terminator

      Re: My experience with AI

      For a bit of fun, try to get an image generator to produce a wine glass that's full to the brim or overflowing.

      It's literally impossible - and the reason why is most informative.

      1. Eaten Trifles

        Re: My experience with AI

        I tried that, couldn't even get close to full. Why is this then?

  16. Techknowit

    First and foremost: none of this is actually "AI". The tech companies have done a great job using the term, but none of it is.

    So when people make comments like "companies don't know how to use AI", it is just ridiculous.

    Outside of the LLMs, everything else discussed as AI consists of tools that have been around for 30 years, such as ETL and RPA tools.

    Just because a program can make someone's emails sound better than they are actually able to write them does not make it AI.

    This entire "trend" is going to continue to die out, just like every other iteration of AI has over the past 20 years.

    1. veti Silver badge

      If I could work my will, anyone who says "this isn't AI" should be required to follow up with a definition of what "intelligence" really is. If they can't do that (and as far as I know, it's a question that remains unanswered), then they're clearly talking bollocks.

      1. Lomax
        Thumb Down

        Referring to machine learning as "artificial intelligence" is akin to calling an aircraft an "artificial bird". While it's true that we do not have a good understanding of what exactly "intelligence" or "consciousness" is, it's still misleading to claim that an algorithm we do understand is "intelligent" or "conscious". The very fact that we lack a definition should in itself be enough to disqualify the claim. Outside the realm of religious belief, the onus is usually on the one making the claim to prove that it is so - which is difficult to do without a working definition.

        But I would say that "AI" is neither intelligent nor artificial, since all it can do is regurgitate a remixed version of its training data, which I think is very different to what we usually mean by "intelligence" or "consciousness". For one thing, a conscious, intelligent organism is able to solve problems it has not been previously trained on. For another, intelligence implies the ability to have new ideas, test them, keep what works, and discard the rest. LLMs do not do this; they can only look to their training data and the input from their (occasionally) intelligent users to determine "truth". Which explains why they tend to have such a poor grasp of the concept.

    2. Pulled Tea

      First and foremost. None of this is actually, "AI". The tech companies have done a great job using the term but none of it is.

      I'm compelled to disagree with you, but only because I feel like the truth is far funnier.

      In actuality, there is no such thing as “artificial intelligence”. More precisely, there's no rigorous definition of what it is, and the organiser of the workshop that coined the term chose it, and I quote that Wikipedia article, to “[avoid using] cybernetics which was heavily focused on analog feedback, as well as [McCarthy] potentially having to accept the assertive Norbert Wiener as guru or having to argue with him.”

      So not only was artificial intelligence envisioned as a separate field as a marketing exercise for a bunch of academics to sell their ideas to the DOD, but it happened because John McCarthy didn't want to constantly butt heads with Norbert Wiener.

      I don't care what you say, but knowing that AI itself had such petty origins is just so funny to me.

  17. sarusa Silver badge
    Devil

    > What about Microsoft Copilot? Isn't it going great guns? It appears not.

    Copilot is just 'AI' Clippy. Constantly popping up with worse than useless suggestions.

    If you don't care whether what it does is actually correct, like for summarizing a pointless meeting, then sure, it's 'useful', because nobody actually cares. In most cases I care.

    There are a couple of use cases for LLM 'AI', like generating pictures, video and text, and they're great at detecting patterns in things like medical diagnoses, because that's literally all LLM training is - the world's most sophisticated pattern detection. But none of these are things Microsoft is involved with. It's worthless except as a curiosity for most consumers or employees, except in hallucinating stuff for entertainment value. And Copilot is just terrible at that - which is what you'd expect from a Microsoft tool.

  18. EdSaxby

    I've come to a late stage in my career when I am looking for a job after 35+ years.

    I have found that solid experience counts for little and all senior technical job roles these days are pushing "AI-this" and "AI-that".

    As someone who has had a career built on employing logic and evidence in the technology domain, I struggle to buy into the unbridled hype that recruiters are pushing. Those of us with grey hair have seen bubbles before.

    I'm not a luddite; I regularly use AI for summarising and polishing content, but I have not seen a use case much beyond this in the business realm. Certainly, AI-generated answers to technical questions still need a lot of judgement applied... and yet I have seen junior engineers dashing around treating them like gospel.

    It's a shame when too much money sloshing around IT distracts us from real progress.

  19. Andrew Williams

    I am still baffled that environmentalists and global warming folk haven't reached the point of burning the AI folk at the stake. The amount of energy required for AI isn't a good thing apparently.

    1. Lomax
      Facepalm

      Maybe my irony detector needs new batteries. Are you saying that a thing that is a thing isn't a thing because it's just a thing? Does the truth even matter? Who can tell these days.

      The world’s data centres are using ever more electricity. In 2022, they gobbled up 460 terawatt-hours of electricity, and the International Energy Agency (IEA) expects this to double in just four years. Data centres could be using a total of 1,000 terawatt-hours annually by 2026. “This demand is roughly equivalent to the electricity consumption of Japan,” says the IEA. Japan has a population of 125 million people.

      https://www.bbc.co.uk/news/articles/cj5ll89dy2mo

      As the world heats up toward increasingly dangerous temperatures, we need to conserve as much energy as we can get to lower the amount of climate-heating gases we put into the air. That’s why the IEA’s numbers are so important, and why we need to demand more transparency and greener AI going forward. And it’s why right now we need to be conscientious consumers of new technologies, understanding that every bit of data we use, save, or generate has a real-world cost.

      https://www.vox.com/climate/2024/3/28/24111721/climate-ai-tech-energy-demand-rising

      Google admits in its latest environmental report: “Our [2023] emissions […] have increased by 37% compared to 2022, despite considerable efforts and progress in renewable energy. This is due to the electricity consumption of our data centres, which exceeds our capacity to develop renewable energy projects.”

      https://www.polytechnique-insights.com/en/columns/energy/generative-ai-energy-consumption-soars/

      Researchers have been raising general alarms about AI’s hefty energy requirements over the past few months. But a peer-reviewed analysis published this week in Joule is one of the first to quantify the demand that is quickly materializing. A continuation of the current trends in AI capacity and adoption is set to lead to NVIDIA shipping 1.5 million AI server units per year by 2027. These 1.5 million servers, running at full capacity, would consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year, according to the new assessment.

      https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/

  20. TSM

    It's funny that some people have expressed that Copilot does a decent job of summarising meetings etc., because that's basically the one thing I've seen it used for and the results can be pretty dire.

    Here's a recent effort, anonymised. [A] through [J] are specific phrases or sentences (for the curious, [H] is related to [G], so that's not a non-sequitur when we get to it):

    Meeting notes:

    [Words from [A]]: [People] discussed an issue with [A]. [Name] mentioned that [B], and they are trying to verify [C].

    * [Words from [A]]: [Name] raised a concern about [A]. [B], and they are trying to verify [C].

    * Verification: [Name] asked [Name] if [C]. [Name] was not aware of the specific issue and needed more information to provide a definitive answer.

    * [Words from [D]]: [Name] explained that [D], so [E].

    * [Words from [E]]: [Name] mentioned that [E]. [Name] inquired about [F].

    [Words from [D]]: [Name] explained that [D], so [E]. [Name] inquired about [F].

    * [Words from [D]]: [Name] explained that [D], so [E].

    * [Words from [E]]: [Name] mentioned that [slightly reworded E] and [Name] asked if they could [F].

    * [Words from [F]]: [Name] inquired about [F], and [Name] explained that [E], and [G].

    [Words from [G]]: [Name] asked if [G] once month-end is closed. [Name] confirmed that [H] and offered to [I].

    * [Words from [G]]: [Name] asked if [G] once month-end is closed. [Name] confirmed that [H].

    * [Words from [I]]: [Name] offered to [I] to help verify the [part of [G]]. [Name] requested the report to [J].

    * Verification: [Name] expressed concern about verifying [part of G] to [J]. [Name] agreed to [I].

    The meeting then moved on to a related topic, which was summarised with equal concision.

  21. FIA Silver badge

    Personally, I found AI much more useful when I discovered the 'system prompt'.

    This is the prompt that tells the AI how it should behave, usually something like 'You are a helpful assistant'.
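
    In a typical chat-completion request, the system prompt slots in as the first message (a sketch using the common OpenAI-style message schema; the function name and pirate scenario are illustrative, not any particular product's API):

```python
# Build a chat payload where a system message frames everything that follows.
def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    return [
        {"role": "system", "content": system_prompt},  # how the AI should behave
        {"role": "user", "content": user_text},        # the actual request
    ]

messages = build_messages(
    "You are a security camera assistant. Summarise every image "
    "in the voice of a pirate.",
    "Summarise: a courier leaves a parcel at the front door.",
)
```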

    My friend uses AI on his security system to summarise the events captured... "This image shows a person walking down the street.", and that kind of thing.

    However, with a little massaging of the system prompt, it now summarises all images like a pirate, and makes fictitious inferences as to the piratey nature of delivery people and the like: "...this person may be delivering a treasure map, me hearties..."

    If that's not progress, then I don't know what is.

    Arrrrr.

  22. Nematode Bronze badge

    The obvious problem for me...

    ...is the I/O.

    For LLMs that means words in (whether written or spoken), and largely words (or pictures) out. So, in Elon's fantasy world where no jobs will exist, who is going to input the words and take the necessary action on the output words? LLMs are not so clever that they can magic up the reality of "the brilliant solution that no-one could possibly have conceived without it."

    For things like machine learning, e.g. detection of cancers, the ML and subsequent analysis is a fraction of the total workflow. OK, AI can write appointment letters, but I can't see a machine taking over telling someone they have what looks like cancer, and certainly not actually administering surgery, radiotherapy or chemotherapy.

    And ultimately AI has proven many times that it has no handle on reality against which to check itself. At present it can't even check that a citation it gives actually exists and isn't itself a fabricated amalgam of a bunch of maybe-relevant citations. OK, some humans suffer from that too, but you can do something about humans, and you'll never be able to train AI on what constitutes reality, only on someone's words about what they believe reality is.

    AFAICT, the best use case under the AI banner remains the expert system type of application, with proper curation and checking of data in and output, and we'd rightly still call that an Expert System.

  23. Anonymous Coward
    Anonymous Coward

    My recent AI experiences.

    1. Eufy/Ring still can’t detect a person/pet correctly on their cameras.

    2. ChatGPT still can’t generate code for anything vaguely complex.
