
AI still doesn't work very well, businesses are faking it, and a reckoning is coming

Enterprise organizations are still struggling to figure out how AI fits into their business, and that may be for the best because it will take time to understand any problems caused by AI-generated code and content. "No one knows right now what the right reference architectures or use cases are for their institution," said …

  1. Nate Amsden Silver badge

    bring it on

    Get us back to normal infrastructure costs. Pop that bubble already; the longer it goes, the worse it'll be. These AI things are wreaking havoc on the tech world and beyond.

    I can't help but wonder how much the sycophantic nature of the chatbots contributes to false confidence in deploying the tech (even if the tech is from another vendor). I've not found a use case for myself to need LLM anything just yet. Fortunately my wife doesn't care about it either, though she has at least one extended family member who is obsessed with it. Scary stuff.

    For some it really seems like a mental illness of some kind, to be totally brainwashed like that. I've seen similar from some (not all) people pushing public cloud over the past 15 years, like a cult.

    I was thinking not too long ago that Elon himself was railing against all this, only to completely reverse course. Sort of how Oracle was completely against cloud, until it reversed course (in Oracle's case because it felt it could get more money out of customers by renting the infrastructure). I read claims a while back that Elon was himself obsessed with a chatbot (or several).

    Side note: I totally misread the word "shop" as "slop" when reading the article:

    "global consultancy PwC and have set up their own shop to help shepherd organizations toward an AI strategy."

    1. Anonymous Coward
      Anonymous Coward

      Re: bring it on

      Too many people underestimate the danger of sycophancy and how it affects humans on the receiving end.

      1. ZedaZ80

        Re: bring it on

        One interesting thing related to this: the sycophancy grosses me out, and I know it grosses out others who have fiddled with various LLMs. I wonder why it grosses out some people and not others (genuinely wonder).

        1. Doctor Syntax Silver badge

          Re: bring it on

          Some are grossed out, some can't get enough and positively seek it. Trump would be an example of the latter.

          1. cyberdemon Silver badge
            Devil

            Re: bring it on

            Mirror Mirror on the Wall

            Who is the Bigliest Prez of them All

        2. doublelayer Silver badge

          Re: bring it on

          I wonder if it's related to the trust people have in the results. I think everyone has a level of pleasure they get from praise, but there's also a lot of gates deciding whether you get that, with praise that doesn't qualify being off-putting. Being told you're clever by someone you respect is nice. Being told that by someone treating you like a child is irritating. Being told it by someone you think is unqualified to know whether you are or aren't clever is probably close to neutral.

          If people run queries through LLMs and come to think of LLMs as intelligent, then they might appreciate praise from them because they associate it with intelligent responses. If they run queries and decide LLMs are crap, then it's irritating. I think the types of queries people run is one indicator of how much trust they have in LLMs. For instance, many of the first queries I ran against chatbots were ones I already knew the answer to because I wanted to test how accurate and complete the responses would be, something that wouldn't work as well if I didn't fully understand what they were talking about. The bots frequently failed, hence my negative impression.

          1. retiredFool

            Re: bring it on

            For me generally, when someone starts getting too praisy, my first thought is always what do they want from me. If I really do something that deserves praise, and the person who sees it qualifies to give it, then I accept it with honor. Otherwise, back to what do they want from me.

            1. Paul Crawford Silver badge

              Re: bring it on

              It is like when someone is suddenly very friendly with me. I know I am not good looking or charming, so my suspicion goes up really quickly in those cases!

              1. Mimsey Borogove
                Pint

                Re: bring it on

                I am not good looking or charming

                Honesty is preferable to either of those, so have one on me!

        3. Steen Eugen Poulsen
          Facepalm

          Re: bring it on

          I started calling AI love "AI psychosis", but someone pointed out it is better labelled the AI Dunning-Kruger effect.

          So you're the CEO of an insanely valued company and you tend to think you're really hot shit, so if you ask the bot about things you know nothing about and it sucks up to you, you're going to think you got it right, except neither the bot nor you is qualified to make that call.

          1. MonkeyJuice Silver badge

            Re: bring it on

            This. Plus, when I am pair programming, or bringing a stuck server back up, I expect some rational pushback, or at least to be asked to clarify what I'm doing. Humans (generally) do this when they're not 100% sure of your next move. When an LLM does it, it means it's obviously way out of its depth. Which is always. The only time they ever appear to push back is when they are being stupid, at which point you're now arguing with a text generator, which is insane behaviour.

        4. Elongated Muskrat Silver badge

          Re: bring it on

          I think the answer here is that sycophancy appeals to certain "personality types" (I use the term advisedly, because things like Myers-Briggs are complete bollocks). Narcissistic people love to be praised, especially when the praise is undeserved. Unfortunately, narcissists also tend to be power-seekers, so what we're getting here is a feedback loop that encourages the worst kind of people to just carry on with whatever they feel like doing, with no checks and balances. Putting people like that in charge of things has predictably bad results; you only need to look at current world politics to see the consequences of not reining in narcissistic sociopaths.

        5. The Man Who Fell To Earth Silver badge
          Boffin

          Re: bring it on

          AI is a stupid person's idea of a clever bot.

      2. Someone Else Silver badge

        Re: bring it on

        Too many people underestimate the danger of sycophancy and how it affects humans on the receiving end.

        Not the folks at the "social" media slingers...

      3. WageSlave5678

        Re: bring it on

        You're absolutely right ;-)

    2. mevets

      Re: bring it on

      " One bad programmer can easily create two new jobs a year. " - David Parnas.

      I am quite confident AI can smash this number; perhaps creating hundreds or thousands of jobs a year.

      It is early days yet; let's give it a chance to really drive up demand.

  2. Groo The Wanderer - A Canuck Silver badge

    This fellow has hit the nail on the head: there is no substitute for intelligent human beings in the mix, and punishing them for not meeting the wrong metrics just buries the business deeper in the hole of wasted spend.

    1. Inventor of the Marmite Laser Silver badge

      Ah. But is it an intelligent human being who dictates that his company shall use AI?

    2. Anonymous Coward
      Anonymous Coward

      > intelligent human beings in the mix

      That assumes a lot...

    3. Rikki Tikki

      Yes, he makes some excellent points, including:

      "you built an AI system from first principles, it would look drastically different from what's offered today."

      As I read it (including the last paragraph), he's suggesting that AI can be very useful in supporting the work that people do, but the current LLM-driven hype isn't doing that.

      1. I ain't Spartacus Gold badge

        Some of the LLM graphics tools are pretty amazing, and can be used by normal users to generate quick art that wouldn't otherwise be possible without someone with some talent. I wouldn't want to use it for the main marketing or anything, and you need to check that your pictures have come out right and don't have 7 fingers or 3 legs, but for someone to bang out some quick art for a presentation or a poster or something, it's quite good. Although I don't know how many times you have to prompt and throw away the result, because the people saying how great it is don't tell you how many prompts it took to get anything usable. So when AI companies have to actually make profits, it could be that graphic designers might even be cheaper, but I've not tried so I don't know.

        I had a meeting with marketing a couple of weeks ago. They'd generated a bunch of AI reports for us to use as guides to the Water Regulations, for me to comment on. We regularly publish this stuff, as it keeps the customers coming to us for advice, and hopefully buying our stuff. It's been an effective marketing tool we've used for 35 years now. But writing it is time-consuming, seeing as it has to be correct. I was told they'd spent considerable time "prompt-engineering", and then produced utter gibberish. In my one test of Google, I gave it a specific water-regs-related question and got a reasonable answer that didn't cite its sources, but as a summary it was longer than the section of the Water Regs it drew from. Don't know where the extra info came from, but it was more helpful than accurate.

        These marketing reports were total crap though. Not only did they get the subject matter wrong, but the structure was a fucking mess. The first page started with an "executive summary" which didn't actually summarise the report - it had no actions to be taken (i.e. wasn't an executive summary) but was just some random intro text telling you that the Water Regs are important so you have to look at them. I suspect the marketing people got the prompts wrong, but that doesn't explain the report structure also being bollocks. But because they had no idea, they were pretty proud of their reports and thought they'd make a good skeleton to base the work on, whereas they just went in the recycling as complete gibberish.

        That's my limited experience with work AI. Doesn't encourage me to try more, but if I had to do basic art again, I'd definitely try it, rather than raiding the clip-art.

        1. vtcodger Silver badge

          Agreed

          I agree with much of this and upvoted it. AI-generated art might be useful for many people. But I suspect that if usage wasn't subsidized and true costs (plus profits) had to be paid, the actual usage would be quite minimal. OTOH when one considers the enormous cost of a large-scale movie or TV battle or mob scene - hiring, costuming, coordinating and paying hundreds or even thousands of extras - it does look like there might be cases where AI is actually cost effective, at least for images.

          1. I ain't Spartacus Gold badge

            Re: Agreed

            vtcodger,

            As I said in my post, we've no idea what the true cost of machine learning really is. Will they keep having to expensively train new models every few months? Can they make the processing more efficient, so they don't need such vast amounts of electricity? Will there be the volume of sales needed to give decent economies of scale, when the real costs become apparent?

            So much of the tech is nearly there, but not quite. I wonder if we've hit a roadblock. The tech companies all assume that they can keep improving and tweaking their models and guardrails until the stuff works as advertised. As the guy in the article says, if you were trying to design an AI from first principles, this isn't what you'd do. But this is what they can get vaguely working. I wonder if it's already close to its limits of tweakability? Who knows.

            But to pay off the $600 billion of investment committed in the last 12 months, that's a minimum of $60-$90 billion profit a year just to pay it back - and leccy costs need to be added to that, so it's not like software, where you can make vast profits once you're selling millions of copies.

            1. ecofeco Silver badge

              Re: Agreed

              For some reason, I am reminded of the Gleno Dam disaster in 1923. The post mortem is one of monumental hubris and incompetency and of course, cost cutting.

              https://en.wikipedia.org/wiki/Gleno_Dam

              Long but more detail: https://www.youtube.com/watch?v=l8J92lQ9c8A

            2. MonkeyJuice Silver badge

              Re: Agreed

              ML or LLMs?

              One of these technologies has a future. The other one is being funded.

            3. Groo The Wanderer - A Canuck Silver badge

              Re: Agreed

              The fact the models can't learn new content on the fly just proves they aren't "intelligent" in any useful sense. They're just statistical snapshots, and always out of date and therefore useless for anything except hallucinating and faking news about current events. That's why companies like Apple had to drop the "news summary" feature of their products; the summaries are wrong because they're based on out-of-date information.

              1. jake Silver badge

                Re: Agreed

                "the summaries are wrong because they're based on out-of-date information."

                It's worse than that ... the very models are wrong because they are full of demonstrably incorrect, incomplete and incompatible data, which is otherwise corrupt, stale, and/or irrelevant by its very nature.

              2. Elongated Muskrat Silver badge

                Re: Agreed

                A great example of this is to give ChatGPT a date of birth and ask it how old a person born on that date is. Unless they have fixed it (by duct-taping on yet another hacky fix), it will get it wrong, based on the date of its training data.
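                The underlying point is that age is date arithmetic, not pattern-matching, so a model that predicts text rather than computing will anchor on its training cutoff. A few lines of ordinary code get it right every time (a minimal sketch; the function name is my own invention):

                ```python
                from datetime import date

                def age_on(dob: date, today: date) -> int:
                    # Subtract one if this year's birthday hasn't happened yet
                    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

                print(age_on(date(2000, 3, 14), date(2025, 1, 1)))   # 24: birthday not yet reached
                print(age_on(date(2000, 3, 14), date(2025, 3, 14)))  # 25: birthday is today
                ```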

                1. Mainframe Greybeard
                  Happy

                  Age related question

                  We have an internal company chatgpt type AI thingy where I work. I asked it: "if i was born on dd Mmmm yyyy how old am i today?" (my DOB obfuscated for security reasons LOL) and it came back with exactly the correct answer.

                  Does that mean the place I work has beaten chatgpt at its own game??

        2. Jedit Silver badge
          Flame

          "that wouldn't be possible without someone with some talent"

          Exactly - and that person with talent isn't getting paid. A very astute person has said that the purpose of AI is to give wealthy people access to skill without giving skilled people access to wealth.

          1. ecofeco Silver badge

            Re: "that wouldn't be possible without someone with some talent"

            I've heard that one as well.

            Really hits the nail on the head.

        3. ecofeco Silver badge

          It's always marketing, innit?

          You can always trust them to get everything wrong. And they always think they are in charge.

        4. Elongated Muskrat Silver badge

          The problem with "LLM graphics tools" aka image generators, is that they are "trained" on real work that real people have worked hard on, and then churn out permutations and variations of that work, essentially copying it, without giving any credit to the original artists, or paying any royalties for use of copyrighted work.

          So, what they actually are, is a computer program designed to take money out of the pockets of struggling artists, and put it into the pockets of CEOs. As such, they can fuck right off, and as far as I am concerned, they need regulating properly, so that such theft is recognised as such.

    4. kventin
      Coat

      so: there _is_ a place for intelligent design after all.

      and to read it in The Register's comment, of all places…

  3. Anonymous Coward
    Anonymous Coward

    Demanding a discount from companies using AI is something I didn't anticipate, but it is very amusing. I wonder how long it'll be before we suddenly stop hearing about AI startups using AI agents to AI engineer their AI toothbrushes to AI clean your AI teeth AI AI AI.

    1. Yorick Hunt Silver badge
      Devil

      ¡AI caramba!

      1. Groo The Wanderer - A Canuck Silver badge

        Aaaaaiiiiiiiiii!!!!! *Leaps off cliff*

        1. Anonymous Coward
          Anonymous Coward

          AI, AI, AI, AI, canta y no llores, porque cantando se alegran, cielito lindo, los corazones?¿

          1. Someone Else Silver badge

            Isn't "AI" Canadian for "yes"?

            Just like "Chat GPT" is French for "Cat, I farted."

      2. I ain't Spartacus Gold badge

        It's no coincidence that half of Iran's name is AI. The war in Iran is looking like it's going to cause at least a mild global recession, and that's going to pop the AI bubble by the Summer, as interest rates go up and $600 billion of speculative investment suddenly looks a lot riskier. Especially as the energy use is so vast, just as the global LNG market is grinding to a halt.

        A correction has obviously been coming to the AI market - it makes no money and costs fortunes, so things were bound to shake out dot.com bust stylee, but as Trump Leeroy Jenkins-es into Iran with no plan to keep the global energy markets going - suddenly that correction is looking pretty immediate.

        As the famous quote says, "economists have predicted 20 of the country's last 3 recessions". But this really does look likely now. Wonder if people like Larry Ellison might end up regretting backing Trump?

        1. DrewPH Bronze badge

          I'm really hoping for more than a mild recession; I need it deep enough to bring RAM prices down to affordable levels again.

          1. WageSlave5678

            Yeah, that's not how recessions work - your wages stagnate or fall relative to inflation, which skyrockets, so you'd be paying more from less income.

    2. Rjan

      It shows you the level of comprehension we're dealing with, if a reader of the Reg didn't anticipate that the savings on wages that ensue from using AI to produce various articles of information would result in cheaper prices.

      Market mechanism 101.

  4. hitmouse

    It's not just AI code outputs that are unassessed, it's all the AI-assisted processes of any knowledge worker. The capacity for any part of an organisation to use AI tools is highly variable: but you can be sure that the more inefficient any unit is, the less capable they are of boosting themselves. So there will be process bottlenecks everywhere.

    1. EdSaxby

      I found this YouTube video (sorry!) and the associated research study interesting.

      In a real world comparison between a human and AI in performing typical business tasks (i.e. real work), AI could not match humans in 96% of cases.

      The study really cuts through the hype.

      I have used AI as a tool (akin to a spellcheck on steroids) for my coding, but I truly don't expect it could deal with the range of experience and nuance that creating a piece of development work requires.

      1. Filippo Silver badge

        I use it in a similar fashion for coding, super-autocomplete. Also to search API docs.

        But it can't generate quality code. It fails utterly at tasks of any complexity - see the SQLite rewrite mentioned in the article, and yes, a 2000x performance loss is an utter failure even if the unit tests are green.

        Even at comparatively simple tasks, sometimes it works, but sometimes it introduces subtle bugs that will bite you in the ass down the line, and you can't know without minutely double-checking everything, which takes longer than doing it yourself.

        It's no wonder that workers are reporting being more stressed and tired when using AI. Every coder knows that the worst task you can get is looking for bugs in someone else's sloppily-written and badly-commented code. Well, relying on copilot is exactly that, all of the time.

        1. Steve K

          20,171x

          The Medium article talks about a 20,171x slow-down (for a particular Index scan when a PK is present) - not 2,000x.....

        2. Anonymous Coward
          Anonymous Coward

          You're a hypocrite if you're anti AI but use it in any form

          1. werdsmith Silver badge

            Not to be anti AI but to be anti AI everywhere.

            There are places where AI genuinely boosts a human performance, and I am using it more and more - but it's not autonomous, it's a helper, it helps on demand.

            There is some idea (at credulous-boss level) that it can be applied in many more places than is appropriate. You can be anti the AI scattergun, but use it appropriately.

            1. I ain't Spartacus Gold badge

              It's a bit like using Wikipedia, or even Google search. If you already have a good deal of subject knowledge, they're a good resource to quickly check something, because you'll probably recognise if it's wrong. So it's more a memory aid than anything. But even there, if it really matters, you need to check properly with multiple sources.

              If you don't have the right subject knowledge, you have to faff around finding the correct search terms before you get a result, and then you're not sure how accurate it is, or if it's missing key data, and so you have to find more sources. Which, to be fair, Wiki can often be a good place to find some good starting sources (if you're careful).

              So if you know what you're doing, you can direct AI - and generate good prompts, and so have a chance of getting useful outputs. But then you have to check everything, as it's got a habit of inventing stuff. And this is where the problem is - humans are bad at checking. If you have to do data entry it's much quicker and more accurate to enter it all twice, and compare the two to find errors, than it is to enter it once and check it thoroughly. You're slower checking than re-entering, and will miss stuff in checking anyway.
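              The double-entry point is easy to sketch: key the data twice and diff the two passes, so a human only ever has to look at the fields where the passes disagree (toy data, field names invented for illustration):

              ```python
              # Two independent keying passes of the same record (toy data)
              pass_a = {"account": "1042", "name": "Smith", "amount": "388.20"}
              pass_b = {"account": "1042", "name": "Smith", "amount": "338.20"}  # typo in pass two

              # Only fields where the passes disagree need a human look;
              # everything that matches is accepted without eyeball-checking.
              mismatches = {k: (pass_a[k], pass_b[k]) for k in pass_a if pass_a[k] != pass_b[k]}
              print(mismatches)  # {'amount': ('388.20', '338.20')}
              ```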

              This makes LLM output most useful for people who already have the knowledge - at which point it has to be quick enough to justify them just not doing the job themselves. There's already work you'd like to farm off to an assistant, but can't, because it takes longer to explain to them, than to do it yourself. And if it takes an unknown number of prompts to get a reasonable output and you have to then check it - you could be onto a losing proposition.

            2. Ken Shabby Silver badge
              Windows

              If I want to dig a trench and rent a backhoe, I ain’t going to think it is a sentient being

          2. doublelayer Silver badge

            For one thing, they never claimed they were "anti AI", so they don't qualify as a hypocrite by your definition.

            For another, that only works if you decide that someone must oppose every possible thing under the big and vague AI umbrella, and that's an unnecessarily broad requirement. A lot of AI is so often wrong that it's useless. There are times where it's often correct, but when it is wrong, it's very wrong, so it's useless. But there are also tasks where you could use it in some ways if you've got a plan and execute it well. I dislike a lot of the people who use AI, but not because they use AI. I dislike them because they use AI and don't correct for its failings or acknowledge the problems they're causing. If you generate good results with an AI step in the middle, that can be acceptable as long as you can continue to generate those good results.

      2. Anonymous Coward
        Anonymous Coward

        A statistical factoid I learned decades ago (from Usenet, if you recall):

        "97.3% of all statistics are made up."

        And a corollary I've observed since then: made-up numbers tend to end with an even digit. To add veracity to your fabrication, end it with an odd digit.

        1. StewartWhite Silver badge
          Joke

          "What's on the end of the stick, Vic?"

          Pedant alert!

          The quote was actually "88.2% of statistics are made up." and it was the marvellous Vic Reeves who supposedly came up with it (see https://themathguy.blogspot.com/2012/12/882-of-statistics-are-made-up.html).

          1. Loudon D'Arcy
            Joke

            Re: "What's on the end of the stick, Vic?"

            > Pedant alert!

            You wouldn't let it lie!

            1. Anonymous Coward
              Anonymous Coward

              Re: "What's on the end of the stick, Vic?"

              I would have let it lie.

        2. DJO Silver badge

          ...97.3% of all statistics are made up...

          They don't always need to be; it's easy to mislead with accurate numbers. For example, there are far more accidents involving sober drivers than drunk drivers, therefore statistically it's "safer" to drive when drunk. We can all see the flaw there (it's similar to Baldrick carving his name on a bullet so he'd own the "bullet with his name on"), but in less obvious scenarios omitting the base rates can generate a distorted emphasis. This is the technique employed by many climate change deniers, possibly without realizing what they are doing.

      3. Anonymous Coward
        Anonymous Coward

        So that means that if Agentic AI can be priced below 16% of that human's salary (because it can operate over 4 times as long working 24*7*365) it's competitive?
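        For what it's worth, the arithmetic behind that 16% figure (assuming the study's ~4% AI success rate, a 40-hour human week, and an agent running round the clock) works out roughly like this:

        ```python
        human_hours = 40 * 52      # ~2080 hours/year for a human
        agent_hours = 24 * 365     # 8760 hours/year, running round the clock
        hours_ratio = agent_hours / human_hours   # ~4.2x

        ai_success_rate = 0.04     # "could not match humans in 96% of cases"
        break_even = ai_success_rate * hours_ratio
        print(f"{break_even:.1%}")  # ~16.8% of the human's salary
        ```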

        1. MonkeyJuice Silver badge

          Only if these humans we're trying to replace recently suffered life-changing head trauma.

  5. Joseph Haig

    The next year 2000?

    In the run up to the year 2000 there were lots of aged COBOL programmers coming out of retirement to receive very high salaries whilst fixing the millennium bug. Are we going to see the same in 5 or 10 years' time when companies realise they have built up huge piles of unmaintainable technical debt and there are no software developers left to sort it out?

    1. LucreLout Silver badge

      Re: The next year 2000?

      That would be the absolute sweet spot for me. Recently retired by then, come back for a last blast couple of years on a contracting gig at insanely high day rates to finish teaching millennials and gen z proper architectures and coding in a post AI world.

      I'm not holding my breath mind. What we want in life is so rarely what we get. At least once you're 40+.

      1. MikeTheHill

        Re: The next year 2000?

        I don't know about a sweet spot. Debugging AI generated code that no one has understood for the last decade? That would be a nightmare far worse than debugging old-school COBOL. The chances of the AI code having any decent documentation or having any documentation that correlates with the code has to be vanishingly small. And, after being molested by generations of AI debugging it would be completely inscrutable.

        I guess the possible upside is that the AI code might be so bad that the owner may have no choice but to have you rewrite it from scratch.

        1. zapgadget
          Coat

          Re: The next year 2000?

          Surely we can just ask an AI to explain the code?

          1. Joseph Haig

            Re: The next year 2000?

            > Surely we can just ask an AI to explain the code?

            Have you ever seen an AI running away screaming?

          2. Scene it all

            Re: The next year 2000?

            Human programmers are very lax at putting comments in the code to explain, not WHAT it is doing, but WHY it is doing it, and WHEN it will be doing it. I wonder if AI puts in useful comments. I think of code comments as messages to a future me who might need to debug it years from now.

    2. MonkeyJuice Silver badge

      Re: The next year 2000?

      I think, given the sheer amount of opaque/insecure/buggy code these things generate, and the fact they tend to want to change every single line in every single file per commit, that if you tip over your stack with this stuff the only solution will be to burn it to the ground and start again. You might be able to roll back to a working commit, but trying to rationalize all the data it'll have mangled will be a nightmare from hell.

      Sounds expensive. I wouldn't like to be THAT company.

    3. Anonymous Coward
      Anonymous Coward

      On the subject of which...

      Back in the days when I was working on the company Y2K issues, the insurance companies did exactly the same thing: you are not covered for issues caused by faulty date calculation (which nicely covered them for the Unix epoch as well...).

      Guess when companies suddenly started to get REALLY serious about Y2k fixes?

  6. spoovy

    C suite failure as per

    I'm honestly amazed it has even gone this far. It seems obvious to me that the only reason software generally works well is because it has passed through a series of people whose reputations and financial wellbeing all depend on it doing so.

    When nobody is directly responsible for the quality of the end product, it's going to be sh!te. Same goes for every produced good or service in all history, no?

    1. Herring`

      Re: C suite failure as per

      Software is not sold to the people who use it or to the people who would have to manage it. It's sold to the C-Suite by spivs with expense accounts

    2. Lee D Silver badge

      Re: C suite failure as per

      Responsibility is the key, as per the old IBM presentation notes.

      Who is honestly out there saying "I'll take the hit if this goes wrong" when things involve AI? Because I certainly won't be.

      MS are pulling back from it (another article out there today) precisely because the people responsible for business systems and data are saying "Nope, I'm not taking responsibility for that".

      You can throw what you like into software development practices, the question is who's going to sign off on it? Only a fool at the moment.

      That's why Open Source projects are rejecting AI pull requests, AI bug reports, and the like. They're not going to take responsibility for the AI, and the people running the AI AREN'T taking responsibility for it either.

      It's all fun and games until the consequences hit. "Vibe coding" is the same kind of thing. Great - until you have a problem and have to guarantee that your code isn't going to cause problems for multi-billion-dollar businesses out there.

      You only have to look at who is or isn't taking responsibility to see what's going to ultimately happen here.

      The AI companies are saying "you can trust us" but also "we're not taking responsibility". The people using those to put them into software are saying "not our fault". The people trying to fix the breakages are saying "this isn't my responsibility, I shouldn't be doing this". And the people using it are saying "I don't care why it's broken, I just need to get my data out / have the system I paid for work".

      Tesla car hits something - they say driver's fault. AI fails to kick in - they say driver's fault. AI gives up and hands over control - they say driver's fault. Authorities - they say driver's fault. Victims - they're suing the driver because Tesla won't take responsibility. And so on. Same story.

      Nobody in the entire chain is taking responsibility for it. So it's going to crash and burn into a bunch of lawsuits eventually. Something big will go down, someone will be blamed, they'll push responsibility away, and then people will realise: NOBODY is taking responsibility for this. And then that responsibility will be assigned to someone. And then it'll all come crashing down when they insist "So you're going to give me a written guarantee that this can't happen again, with penalties if it does?" and then they'll be scrambling to get humans involved again, just so they have someone to blame/sack.

  7. Steve Davies 3 Silver badge
    Black Helicopters

    re: AI still doesn't work very well

    Try telling that to the millions and millions who have or are about to end up on the breadline.

    Some of us have been warning about this for years but were drowned out/shouted down by the young MBAs in sharp suits who sold the 'AI Vision' to the world and are now sitting in their beachfront villas saying 'Suckers' at the rest of the world.

    There will be a big reckoning in the AI world very soon and perhaps... just perhaps a few of the companies that have been suckered in by the wonders of AI will see the light and reverse course before it is too late.

    1. LucreLout Silver badge

      Re: re: AI still doesn't work very well

      In financial markets there's a moment of capitulation where everyone fighting against the start of the direction change just accepts it. It's a necessary precursor for the direction change.

      I've gradually given up on rational arguments against AI. It's just not stopping, no matter how much I think it should. Will the bubble burst? Eventually, yes. Will it go away after? No, not now, but it will change.

      I want you to be right, I really do, but I've finally capitulated and accepted you won't be. The sooner you join me the sooner this shit show can change direction.

    2. Excused Boots Silver badge

      Re: re: AI still doesn't work very well

      "Try telling that to the millions and millions who have or are about to end up on the breadline."

      Oh maybe best to invest in the Pitchfork and “Torches” company?

    3. cd Silver badge

      Re: re: AI still doesn't work very well

      Brother, can you spare a token?

  8. Anonymous Coward
    Anonymous Coward

    I've made the same arguments cited in this article to people in my company - and I am just seen as 'not being with the programme'. I've documented cases where AI just got things wrong, and it gets brushed off. C-Suite are pushing people to use AI even where it is questionable. The amount of technical debt being built as people are cycled away from their jobs after being 'replaced' by AI is astounding. There are very few people left to fix the mess AI makes.

  9. Ol'Peculier
    Mushroom

    Mice to see somebody finally talking sense. There are bits of this I'm going to try to memorise to bring up at meetings if and when AI comes up.

    1. You aint sin me, roit
      Pirate

      BOFHs around the world sit up and take notice...

      'Oh you're producing your PowerPoint decks with AI. Well I want to pay you less.' is only the start...

    2. StewartWhite Silver badge
      Coat

      "Mice to see somebody finally talking sense." (sic)

      There's the problem, it's only Pixie and Dixie that can see what's going on - not the "Smartest Guys in the Room".

  10. Tron Silver badge

    For now, ignore AI and the mountebanks selling it.

    Don't waste your money or your reputation on it. Focus on core services and security. Avoid SaaS and the cloud. Audit your use of tech. Only digitise stuff if there is a clear benefit. Use paper and non-networked, generic tech instead. Keep your intranet permanently detached from the public internet. Use a second, disposable network for net interaction.

    In short, let others run down the latest rabbit hole. Hold back. You can join in later if any of this stuff works and is commercially viable (which is unlikely). Let some other mugs be the crash test dummies for this.

    1. Ol'Peculier

      Re: For now, ignore AI and the mountebanks selling it.

      Modern version of Everybody's Free (To Wear Sunscreen)!

    2. vtcodger Silver badge

      Re: For now, ignore AI and the mountebanks selling it.

      Agree, but I think the strategies suggested in this post are perhaps more about internet security than AI. Not that AI in its current state has much to recommend it.

  11. deeredd

    Think of the bloody children!

    There's kids in my youngest's class who do _everything_ with LLMs and some of the crappier teachers use it to set and mark work.

    Come exam time it's a catastrophe but come any kind of mental work it is too, reach for the prompt and paste the results.

    It is heartening at least that there is a hardcore of LLM haters who complain about staff using it and will never use it themselves.

    1. AnAnonymousCanuck

      Re: Think of the bloody children!

      The kids' use does not bother me too much. Language ability and manipulation, defining requirements, dealing with lies and hallucinations, all useful skills.

      The teachers' use horrifies me at this point in time, however, once trusted data sets are used for training.....

      IMHO

      AAC

    2. werdsmith Silver badge

      Re: Think of the bloody children!

      In fact kids are using it in maths to walk through steps of a process and to check the outcome. At each step the LLM can give further assistance and help. Amazingly it has become an effective surrogate teacher available 24x7.

      1. DancesWithPoultry
        Stop

        Re: Think of the bloody children!

        I assure you, if there is one thing LLMs are utterly utterly crap at, it is mathematics.

        1. This post has been deleted by its author

        2. werdsmith Silver badge

          Re: Think of the bloody children!

          You can assure me all you like, but you are missing the point and I'm seeing it do a great job before my eyes.

          Not talking about calculations involving real numbers, we already have calculators for that, totally pointless doing arithmetic on LLMs.

          But if you want to see a process for, for example, deriving from first principles or the many tricks and shortcuts there are to achieve the same goal, I am able to see it with my own eyes doing this perfectly well and offering excellent explanations. And they can offer proofs too.

          Because it's not actually doing mathematics, it's not computing some arithmetic output, it is teaching a particular discipline of mathematics, the steps a student needs to learn to do mathematics.

          Of course you don't like the idea and want to scoff at it, but it's the plain truth. So cry on.

          1. Strahd Ivarius Silver badge
            Facepalm

            Re: Think of the bloody children!

            And all that you see is coming from documents real mathematicians published earlier, that the LLM is regurgitating without thinking and without understanding what it is sending to the requester, but providing only because statistically this is the likeliest answer (and not always the best)

          2. DJO Silver badge

            Re: Think of the bloody children!

            And here's the problem: one could happily use an "AI" and see that, yes, it got the 20 test principles correct, so they ease off double-checking, and the 24th one is utter crap and gets straight past. So we're back to the original problem: it often takes longer to fully check the "AI" work than it takes to do it oneself from scratch, and if it's not checked, it is guaranteed to make an error eventually.

            But if it works for you and you are OK with the occasional error, then fine.

        3. jpennycook

          Re: Think of the bloody children!

          the Trading 212 AI tells me that, after my first payment into my Stocks & Shares ISA, I'm 46% of the way to the target it set me. I've put about £500 in, its target is £119,666, I think it's confused.
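      Confused indeed - a quick sanity check of those figures (using the approximate numbers quoted above) puts the real progress closer to half a percent:

      ```python
      balance = 500.0        # roughly what was paid in so far
      target = 119_666.0     # the target the app reportedly set
      progress = balance / target * 100
      print(f"{progress:.2f}% of target")  # about 0.42%, not 46%
      ```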

  12. moarthumbsdownsplease

    AI is just a tool that can be used badly or misapplied, like any other tool. People who sit down knowing nothing of the software engineering process, and who don't very specifically require the AIs to use certain methodologies, are absolutely going to get code that works but is slow and terrible to read. Asking the AI to use interface definition documents, mock data, test-driven development principles, unit tests for both functional and non-functional elements, source control, tickets linked to the check-ins, a rule against accepting code that breaks the build, and demos with sign-off will end up with much better code. But, you know, they need to know all about that before they start with a "Hey Claude, can you make me my own version of [some real product]" prompt and expect miracles. Stopping them generating rubbish and releasing it is the bigger problem, I fear.
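    To sketch what "unit tests for both functional and non-functional elements" might look like in practice (the `dedupe` function and the timing threshold here are invented for illustration, not from any real project):

    ```python
    import time

    def dedupe(items):
        # Hypothetical function under test: drop duplicates, keep first-seen order.
        seen = set()
        out = []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    def test_functional():
        # Functional requirement: the answer is correct.
        assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]

    def test_non_functional():
        # Non-functional requirement: a large input finishes inside a budget.
        start = time.perf_counter()
        dedupe(list(range(100_000)) * 2)
        assert time.perf_counter() - start < 1.0
    ```

    The point is that the second kind of test is exactly the one that catches the "works but is slow and terrible" output, and it's the one nobody asks the AI for.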

    1. Anonymous Coward
      Anonymous Coward

      But shouldn't the AI do all that if it was any good?

      1. MonkeyJuice Silver badge

        That's [whatever AI is]. What we HAVE is transformer models. Hooray, we're in the future.

  13. Anonymous Coward
    Anonymous Coward

    "It passed all the unit tests, the shape of the code looks right," he said. "It's 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All that money you spent on it is worthless."

    Far be it from me to defend anything AI, but in my experience new code that replaces something existing always has initial bugs and/or performance issues. And no matter whether the creation process was probabilistic or not, once the code has been written, the software is what it is. So you can analyse and profile it in a repeatable way.

    According to the original article, the slowness was due to a missed condition that should have used a primary key index, instead of defaulting to a full table scan. That kind of shit always happens with rewrites of all sorts, especially if done by someone who hasn't lived and breathed the original code for a long time, and this particular one seems a particularly simple one to fix - no need to throw away all the code.

    Once the particular bug was fixed, the software will surely run at a far more reasonable rate even if not as fast as the original implementation. So you profile, optimise, profile, and optimise. And fix the bugs that pop up from the weirdest corner cases.
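    The profile-and-fix loop described above is directly observable in SQLite itself, which will tell you whether a query hits an index or falls back to a scan (table and queries below are invented for illustration):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    # A lookup on the primary key: the plan reports SEARCH ... USING
    # INTEGER PRIMARY KEY, i.e. the index is used.
    for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT name FROM users WHERE id = 42"
    ):
        print(row)

    # A predicate the planner can't match to any index: the plan reports
    # SCAN users - the full table scan of the kind described above.
    for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT name FROM users WHERE name = 'bob'"
    ):
        print(row)
    ```

    So the bug class is at least cheap to diagnose, whoever (or whatever) wrote the query.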

    1. MonkeyJuice Silver badge

      that's neat but...

      LLMs regularly give you an O(n^2) solution and claim it's an O(log n). They're really good at implementing the naive algorithms, because there's lots of repos to copy it from, and the log(n) implementations are fiddly to implement, and considerably beyond the abilities of the damn things.

      Why not just use SQLite? Not only did they spend the time figuring out how to implement it correctly by studying the theory and algorithms, they benchmarked it, ensured it didn't randomly truncate the database file on Tuesdays every third month, and have spent years fielding bugs and fixing minor edge cases. It's also free, rather than spending $10k on tokens over the course of the project that you will, in the end, have to ditch anyway, once the true horrors of the eldritch monstrosity you have created become apparent.
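      For a toy illustration of that complexity gap (example mine, using a sorted-membership lookup): the naive version below is the one there are a million repos to copy from; the logarithmic one is the fiddly one.

      ```python
      import bisect

      def contains_linear(sorted_items, x):
          # The naive version: walks the list, O(n) per lookup.
          for item in sorted_items:
              if item == x:
                  return True
          return False

      def contains_bisect(sorted_items, x):
          # The O(log n) version: binary search via the stdlib.
          i = bisect.bisect_left(sorted_items, x)
          return i < len(sorted_items) and sorted_items[i] == x

      data = list(range(0, 1_000_000, 2))  # even numbers only
      assert contains_linear(data, 500_000) == contains_bisect(data, 500_000)
      assert contains_linear(data, 500_001) == contains_bisect(data, 500_001)
      ```

      Both give the same answers; only one of them is the complexity class the chatbot will claim both are.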

      1. Strahd Ivarius Silver badge
        Coat

        Copilot designed a version of MS SQL that for some reason crashes every 2nd Tuesday of the month.

  14. JimmyPage Silver badge
    Stop

    Deeks argues that if you built an AI system from first principles, it would look drastically different from what's offered today.

    Would still just be a clever pattern matching system though

  15. Anonymous Coward
    Anonymous Coward

    Basically we have legions of people who think that AI has turned them into coding ninjas. It hasn't.

    1. LucreLout Silver badge

      Sure, but we've always had legions of people that thought they were coding ninjas that regularly shat the bed. AI, no AI, same thing in that regard.

  16. Gavsky

    Screening medical tests; summarising documents; checking/searching data - TICK

    Everything else - A highly qualified TICK

    We use AI to answer work-related, process queries - sometimes it's very helpful; sometimes it's very wrong. Couching a question in different ways makes a big difference to accuracy - it simply shouldn't! There IS a 100% right/wrong answer, but you can't trust AI to tell you this.

    AI isn't intelligent, discerning or (laugh out loud) 'sentient'. It doesn't know if it's wrong; the danger is: does the human?

  17. retiredFool

    Insurance

    Interesting the article mentioned insurance. They love premiums, they don't like to pay. So AI will give claims dept one more tool to deny the claim.

    1. Herring`

      Re: Insurance

      I'm not a great fan of insurers but they are in a competitive business. If they say that they won't cover buildings that are at risk from flood/fire/storm, they won't cover ships going through the Strait of Hormuz and they won't cover a business which has outsourced its critical thinking to the stochastic plagiarism machine then it is because the risk is too high. If I asked them to cover me for my plan to fly a 747 inverted under Tower Bridge, they would've said no before they even found out that I have never had a single flying lesson.

    2. Daemonik

      Re: Insurance

      Some insurance companies are already including exclusions for anything caused by AI. Berkley specifically, and the ISO 40 47/48 is excluding AI related problems.

  18. DrSunshine0104

    But did we need to ruin the economy to check that water is wet?

    1. Anonymous Coward
      Anonymous Coward

      According to Schumpeter, yes!

  19. Anonymous Coward
    Anonymous Coward

    AI just needs a good rebrand

    New whale song based jingle, some joss sticks and a 100% bamboo shirt.

    That’ll get the kids behind it.

  20. Wiretrip Bronze badge

    This is brilliant and the most damning critique of the bullshit machines bubble to date! Can't wait to see what Ed Zitron makes of it.

  21. Daemonik

    The Bigger Issue...

    A lot of insurance underwriters are starting to refuse cover for problems caused by AI. Think that's gonna have a bigger impact than anything on business adoption and use. Businesses are going to suddenly get very cagey about using a product where its mistakes aren't covered.

  22. Random as if ! Bronze badge

    Linedin

    Seems to be doing well out of it, with the new LinkedIn influencers and its use of AI on everything, and the candidates are exactly as their profiles describe them.

    1. jake Silver badge

      Re: Linedin

      I usually use RCA, not DIN ... unless it's a keybr0ad.

  23. Killer B's

    I’m genuinely glad you wrote this.

    In a small, very modest AI project of my own, auto‑drafting responses to frequently asked emails, we learned an essential lesson: AI only works when the problem space, data, outcomes, and edge cases are tightly defined, and when humans remain firmly in the loop.

    The AI removed drudgery from reading each email and crafting a response, not responsibility. Every output was reviewed before release. Quality was guarded. Accountability remained human.

    What troubles me now is that the fundamentals of controls, of validation, of restraint, are increasingly absent in production systems. AI is being waved through quality gates that took decades to learn the value of.

    To borrow Churchill’s logic: speed is admirable, but direction is decisive. Executives chasing volume over value are not accelerating the future: they are mortgaging brand equity for short‑term applause.

    1. jake Silver badge

      "AI only works when the problem space, data, outcomes, and edge cases are tightly defined"

      In other words, you have built an algorithm to answer email. Which many of us have been doing for four or more decades (ever join a MUD, MUSH or MOO? Join an Email list? Or, at another level, run an email server with spam filtering?).

      So why are you wasting time and energy in AI space? It's hardly necessary ... and becoming quite spendy as the builders try to keep the investors happy.

      1. Eye Know

        The honest answer is, people don't have the foggiest idea how to build an algorithm to answer email; however, give them an AI agent they can explain stuff to and they will get it done the expensive way, despite it being enormously wasteful of energy, water and land.

        It's kinda like interpreted code vs compiled code back in the day.

  24. Blackjack Silver badge

    So AI slop bubble crash when?

    1. Eye Know

      Soon, oil prices will only go up for a while yet.

  25. JKVR1

    When there is a disruption such as the one AI is causing, it creates a tsunami. Always best not to get caught in the tsunami (it's incredibly tempting, especially in this case); wait for the wave to recede and see what's left on the beach. In the meantime, master AI on your terms. You don't need to put it into wholesale practice, but be better than anyone else where you work when it comes to fully leveraging AI. We are at the point with AI where the wave is JUST STARTING to recede. The key is not to stand with everyone else waiting for the wave to recede. Instead, you want to be the first person on the beach, fully prepared, when it starts receding. That would be about now.

  26. Eye Know

    I've found some use cases at work

    I've found some use cases at work, but nothing we would pay for, not with the hit or miss results we get.

  27. An_Old_Dog Silver badge

    Taking Time

    ... it will take time to understand any problems caused by AI-generated code and content.

    No, it will take time to tally and understand the additional problems caused by AI-generated code and content.

    We know about many AI-related problems right now!
