Apple called on to ditch AI headline summaries after BBC debacle

Press freedom advocates are urging Apple to ditch an "immature" generative AI system after it incorrectly summarized a BBC news notification, falsely stating that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself. Reporters Without Borders (RSF) said this week that Apple's AI kerfuffle, which …

  1. oreosRnice

    Turn it off, or don't enable it from the start.

    It's not that hard. Why are people so hell-bent on being just... stupid and lazy?

    1. Jou (Mxyzptlk) Silver badge

      The classic blame-twisting, isn't it? Then there are the other problems: that the AI output cannot be distinguished at a glance by a clear mark, that it is enabled without warning, and so on... There is always a way to twist it into victim blaming...

    2. doublelayer Silver badge

      That's fair for Apple's habit of sending random news headlines to iPhones that never asked for them. Those were often irrelevant, but at least they were true. Users can turn those off when they realize they don't need them.

      Making up headlines and sending those is much worse, and unhappiness with Apple is completely justified, especially for the reporters given credit for the false statements. Consider your reaction if you were working in an office and the person next to you frequently shouted inaccurate statements at you about things you needed to check. Yes, you could and would eventually ignore them, but it wouldn't be your fault if you wasted your time checking out one of those inaccurate statements, the problems caused by any reaction would be their fault, and someone in management might well tell them to stop doing that. We can't make Apple stop doing that, but we have lots of reasons to want them to. Asking users to turn the thing off is not good enough.

      1. stiine Silver badge
        Facepalm

        no more A/B testing, huh?

        Wow, really? Even El Reg uses A/B testing for headlines. Can you imagine the lack of confusion when the practice is no longer permissible?

        1. collinsl Silver badge

          Re: no more A/B testing, huh?

          Yes but those are a) factual 2. written by a human and ♣) journalistically justified (if el reg even does a/b testing at all). The AI generated hallucinations spat out by AppleAI are none of those things.

      2. hoola Silver badge

        And the chances of Apple desisting with this rubbish system are as likely as a flying pig. All companies pushing AI-generated information simply don't care whether it is correct.

        Their view is that the AI koolaid is perfect.

        1. Cris E

          "Their view is that the AI koolaid is perfect."

          AI generated content is all about cheap, not at all about content. It's so cheap that the AI koolaid doesn't need to be perfect or even potable, just flowing. And TBH they may get paid more on the incorrect stuff as more folks do the "that can't be right" click-through on the obviously wrong ones. (At least until everyone realizes they can indeed be wrong and then the whole news headline service loses share. At which point they reset and rebrand.)

      3. mattaw2001

        If I can add to your metaphor it's worse - the person shouting inaccurate information is telling everyone **YOU** said it!

    3. AndrueC Silver badge
      Meh

      Why are people so hell-bent on being just... stupid and lazy?

      Evolution. Thinking requires energy. We will instinctively take whatever shortcut we can to form a conclusion. And once we've reached a conclusion we are very reluctant to change it.

      It's not a bad way to solve the problems that the universe throws at us but it's more suited to a simpler life. In today's complex societies we need something better.

      1. Prst. V.Jeltz Silver badge

        Laziness is my primary driver to innovate and, mainly, automate.

    4. CowHorseFrog Silver badge

      Unfortunately, being in the *news* for anything is better or more important than actually being worthwhile or honest or true.

      Just look at the loser influencers that somehow make millions when they should be cleaning toilets, but even that's probably too good for them.

      1. sabroni Silver badge
        Thumb Up

        re: loser influencers that somehow make millions

        Yeah, those losers with their millions.

        Ha, suckers, we might not have as much money but we get ours the winner way, working 9-5 for companies that actively hate us.

        1. cyberdemon Silver badge
          Headmaster

          Loser influencers

          The losers are the ones who watch their shitty videos all day instead of having a life

          Loser influencer, influencer of losers, is how I read it.

          How would you like to lose your money today? Buy my shitcoin! Too new for you? How about an old-fashioned pump-and-dump! Got no money left? Buy this overpriced energy drink, it won't help you get a job! ... No money at all?? OK, just watch some more ads then.

    5. Ian Johnston Silver badge

      Dunno about Apple, but you can't turn off Google's ludicrously inaccurate AI summaries.

  2. Mage Silver badge
    Mushroom

    It's garbage

    It isn't up to users to turn it off.

    Almost all this kind of so-called AI needs scrapped.

    It's dishonestly promoted.

    It's producing garbage when not plagiarising.

    The environmental cost is too high.

    Make it illegal; fine companies 10% of annual turnover if they deploy it.

    Nuke it from orbit!!

    1. Jou (Mxyzptlk) Silver badge

      Re: It's garbage

      Oh no, the AI method of neural networks and their current implementation is fine. The problem arises from applying it to things where it doesn't work and ignoring that it doesn't work.

      A huge amount of AI is used in materials research, component optimization and so on, saving time and money there - and actually saving energy too in the end. But that doesn't make it into the news, and is not abusable for marketing "Yay Notepad Need AI Too Yay!"

      1. Doctor Syntax Silver badge

        Re: It's garbage

        They demonstrably don't work in some situations. In that case, how do you determine the boundary between the areas where they do work and those where they don't? If you have a hundred or a thousand instances of a system working correctly, how can you be certain the next one - or the next hundred, or thousand - will still be OK?

        1. The Oncoming Scorn Silver badge
          Coat

          Re: It's garbage

          Where's my (BBC) Minority Report!

        2. Jou (Mxyzptlk) Silver badge

          Re: It's garbage

          That is simple: By not doing it half-assed. But that does not make quick money, that generates slower money.

      2. This post has been deleted by its author

        1. doublelayer Silver badge

          Re: It's garbage

          Theoretically, that would be a benefit. Theoretically, it should be easier to fix an error in something than to make it from scratch. In my experience, that's not how it works. Correcting a bad document into a good one often takes longer than writing a good document in the first place, not even counting any time spent to make the bad document. The theory is only correct when I'm correcting a pretty good document by fixing small grammatical or factual errors.

          When an LLM is liable to make something up and rest large parts of the document on that flimsy foundation, correcting it usually means removing everything but a mostly usable introduction paragraph and trying to do it another time. That doesn't result in less effort expended, because you're multiplying the time it takes to check every fact alleged by the document by the number of times you generate something new, plus any time you need to make manual edits. The further problem is that, if someone decides to skip one of those stages, the document still looks like it is complete, but now it's the kind of thing that results in summary judgements against you from judges annoyed that you're making them fact-check your submissions.

          1. Anonymous Coward
            Anonymous Coward

            Re: It's garbage

            Is there any kind of legal remedy for the BBC in this kind of case where Apple could be argued to have "misquoted" them and materially damaged their reputation?

            1. hoola Silver badge

              Re: It's garbage

              Not one that is winnable in a meaningful timescale. The main proponents of AI have very deep pockets and have already proved time and time again that they believe they are exempt from laws or any form of social, moral or business responsibility.

          2. Anonymous Coward
            Anonymous Coward

            Re: It's garbage

            "When an LLM is liable to make something up and rest large parts of the document on that flimsy foundation, correcting it usually means removing everything but a mostly usable introduction paragraph and trying to do it another time."

            This is true. A general-purpose LLM will do that a fair bit and no right-minded soul would trust it. But a more focused, RAG-driven one, used reasonably with final proofing, is powerful. The more you do with it, the better. The paralegal had so much training data from their archive that their Llama tuned itself in a few days.

            "The theory is only correct when I'm correcting a pretty good document by fixing ... factual errors."

            It goes beyond that, although those account for a lot! It finds contract-law stuff that you would never find yourself. And it works a dream in property purchases with those reams of paperwork. An archive-tuned LLM with RAG, that is.
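            For the curious, here is a minimal sketch of what the retrieval step in such a RAG setup does. This is a toy illustration only: the archive snippets, function names and word-count scoring are all invented for the example, and a real system would use vector embeddings and an actual Llama model rather than bag-of-words matching.

```python
import math
import re
from collections import Counter

# Toy stand-in for a firm's document archive (invented snippets).
archive = [
    "Completion notice must be served 10 working days before the completion date.",
    "A deposit of 10% is payable on exchange of contracts.",
    "Stamp duty land tax is due within 14 days of completion.",
]

def bow(text):
    # Bag-of-words vector: lowercase alphanumeric token counts.
    return Counter(re.findall(r"[a-z0-9%]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=2):
    # Rank archive passages by similarity to the query, keep the top k.
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the model: instruct it to answer only from retrieved passages.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using ONLY these passages:\n{context}\n\nQuestion: {query}"

print(build_prompt("When is the deposit payable?", archive))
```

            The point of the retrieval step is that the model is only asked to answer from passages actually present in the archive, which is what makes the final proofing tractable.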

            1. Muscleguy

              Re: It's garbage

              New laws and regulations are issued constantly. How do you ensure your AI knows about them, is interpreting them correctly, has not confused the date passed with the date coming into use etc etc?

              The law does not stand still.

              1. stiine Silver badge

                Re: It's garbage

                I don't know about the EU, but in the US, we aren't supposed to have any secret, or copyrighted, laws*. The site that comes to mind is https://law.cornell.edu/ which should be perfect for scraping.

                * - I'm sure we do, but they are the exception, not the rule.

                1. doublelayer Silver badge

                  Re: It's garbage

                  There aren't secret or copyrighted laws in these cases. There can be exceptions, for example where a law mandates a standard and ISO won't give you the standard without payment, but most cases don't involve that kind of thing so we can ignore them for now. The problem is that, even when you scrape all the laws and feed them into an LLM, they can easily mistake things the way they mistake lots of other things. A law means you are allowed to conduct a certain action, and you are sued for conducting that action, sounds like a match. Except the LLM has not noticed that the law allows you to conduct that action if you are a law enforcement officer in active duty following a disruption to communication caused by a serious natural disaster or terrorist attack, but that only appeared once in the training data so the LLM didn't recognize that you're none of those things.

                  Best case: a lawyer, paralegal, or other legal person reads the produced document. They weren't aware of the law, so they look it up. In the summary, they realize it doesn't apply to you. They throw out the document and start again. Maybe the LLM will produce something correct the next time. Result: the time to generate the original document and the time to review it for errors is lost.

                  Average case: A lawyer hands the document to a paralegal and says "check this". The paralegal reads the document and finds the reference to a law. They spend a while reading the text of that law to confirm that, even though the summary seems to limit it, the LLM which is supposed to be the next great thing may have found a cool loophole which will get this client off. They spend several hours checking this to realize that it doesn't help. They report their problems to the lawyer. The lawyer sends the report to the prompt generator. The prompt generator makes a new document and the process repeats. Result: several hours added to your legal bill.

                  Worst case for now: The lawyer hands the document to a paralegal and says "check this". The paralegal sees that a law is mentioned and sees the quote that the action is allowed. They check that the law exists, and it does. They check whether the quote is in there, and it is. They send the document back approved. Result: "Guilty. We are also considering contempt of court charges for counsel for the defendant."

                  1. collinsl Silver badge

                    Re: It's garbage

                    > Result: "Guilty. We are also considering contempt of court charges for counsel for the defendant."

                    Or they get reported to their bar association for professional misconduct and potentially lose their jobs.

        2. ShortLegs

          Re: It's garbage

          Law firms trust AI? After some recent scandals involving AI-"checked" case law, complete with plaintiffs and outcomes, I'm surprised anyone would let an AI anywhere near a law firm.

          https://blog.burges-salmon.com/post/102ivgu/a-cautionary-tale-of-using-ai-in-law-uk-case-finds-that-ai-generated-fake-case-l

          https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html

          https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ <- same case as the CNBC report

          1. Anonymous Coward
            Angel

            Re: It's garbage

            Those cases are rightly held up as a warning. They are pretty dumb examples though.

            Here is one: a charity submits legal documents against a defendant. A small technical error, very easy to make and to miss even when checked by a few people, gets picked up by Llama. The charity then has time before court to resubmit its action, rather than losing 6-8 weeks raising a new action after the error gets pointed out by the defendant's solicitor at the hearing. That happened 3 days ago. The charity would have lost thousands more pounds than it already has.

            That action alone easily paid for the £700 it costs to set up a local Llama. You can't argue with coin.

            If you are talking money, then check everything 10 times anyway.

            Well-heeled chambers use them. Llama. They have so much archive material for training the AI that it is very reliable after checks and cross-checks, etc.

            1. nobody who matters Silver badge

              Re: It's garbage

              That makes the assumption that the lawyers for the charity concerned had checked the documents fully and properly and had missed the error. Once you start to rely on Llama to pick up small errors, it is but a short step to using it to find the larger errors too, and to not having the lawyers themselves check the documents properly to start with.

              What then happens when Llama makes an error, or misses something (which, judging from the number of mistakes that these systems do seem to make in other scenarios, is inevitably going to happen at some point)?

              I would be disinclined to use any legal practice that starts to rely on these systems in their current state.

              Oh, and if I did, don't even think of billing me several hundred pounds an hour for machine generated output ;)

              On the other hand, as these sorts of system develop, become more accurate, more reliable, less costly to run and personal computer capacity and ability improves, it will perhaps become possible to spend a relatively small sum on a PC level version to do the same job, and cut the expensive lawyers out of the loop altogether ;)

              1. Cris E

                Re: It's garbage

                There's been software to check cites for decades that's saved mountains of costs over those years. Automation of the rote tasks of law has always been a high priority because of the expense and importance of the work, so turning the next generation of tools on the problem is pretty much automatic. The expansion of AI into the next layer of abstraction is arriving now, is immature, still requires an extra set of eyes, but is improving constantly. Well-trained and well-maintained LLMs can be hugely important just as simplistic hack jobs are hugely problematic. Blanket rejection of use of these tools is just as ignorant as blanket reliance on them.

            2. sabroni Silver badge
              Devil

              Re: Well-heeled chambers use them.

              It's always the rich who are penny pinching bastards. Doesn't make them clever when they jump on the bandwagon, just greedy.

        3. CowHorseFrog Silver badge

          Re: It's garbage

          Let's be honest: even without leveraging AI in any form, we could easily fire at least half of all office workers and it wouldn't make a difference.

          Most people at work don't actually help or add value in any form; they are simply there because the system is broken and continues to allow fakes to exist.

          1. hoola Silver badge

            Re: It's garbage

            How would it not make any difference?

            You reach a point where there is no market for whatever you do because nobody can afford it.

            1. CowHorseFrog Silver badge

              Re: It's garbage

              Fake jobs for people who contribute nothing are not a positive benefit for society; it's the complete opposite.

              We could be doing many wonderful things for the planet if money wasn't wasted on these parasites.

        4. Muscleguy

          Re: It's garbage

          So how does it handle the complexity of Scots law vs English, Welsh or Northern Irish law, for that matter? Or different state laws in the US?

          There isn’t just one law in the world.

          1. stiine Silver badge

            Re: It's garbage

            I would presume that part of the prompt would be "for submission to the clerk of the xxxxx court."

            I suggest that all LLM-written complaints/charges should be handled in such a way that if the case is dismissed because of grievous errors, it has to be dismissed with prejudice.

        5. Anonymous Coward
          Anonymous Coward

          Re: It's garbage

          Post that shit AC as much as you like, but you are basically saying "I am ashamed of this opinion".

          That shame serves a purpose. It's to make you think about what you are saying and why you are saying it.

          Be brave and post pro-AI stuff with your handle so we can see all your posts together and find out whether you genuinely believe this bollocks (in which case you haven't been following how well AI deals with legal processes) or are just trolling us.

          1. nobody who matters Silver badge

            Re: It's garbage

            ....says the person posting AC :)

    2. Adam Foxton

      Re: It's garbage

      Just treat it as something that is considered to be deployed by the most senior executive of a company. Hold them- or the entire C-suite- legally responsible.

      1. CowHorseFrog Silver badge

        Re: It's garbage

        Executives responsible?

        Is this a joke?

        When has an executive ever been responsible? The executives at B signed off on changes that killed hundreds of people and they got paid tens of millions...

        Oh yes, another example of corporate mindspeak that is completely the opposite of what the word actually means.

        Responsible pairs well with Family and Culture in corporate bullshit lingo.

    3. Groo The Wanderer - A Canuck

      Re: It's garbage

      Agree completely. Biggest pyramid scam on the planet.

      1. Mage Silver badge
        Thumb Up

        Re: pyramid scam

        Interestingly, Charlie Stross wrote that AI was the next big scam after Crypto-coins and then NFC.

        1. Crypto Monad Silver badge

          Re: pyramid scam

          NFC works pretty well and is non-controversial. Do you mean NFT?

    4. Anonymous Coward
      Anonymous Coward

      Re: It's garbage

      The real dishonesty comes from not knowing the models being used behind these services. Transparency is required...

      There are lots of services out there claiming to provide access to 405b parameter models (and larger) but when you test them out, they provide results that are a bit sus.

      It just doesn't make sense for any business to give customers cheap access to a massive model, given the amount of RAM required.

      I don't agree that AI tech is entirely crap, because what I've seen running on dedicated hardware using absolutely huge models is astounding and incredibly accurate, with very few absurd hallucinations. However, the smaller you get with models, the wackier it gets. The sweet spot at the moment for "cheap" models is around 8b-12b parameters; those regularly hallucinate and spit out crap. I believe these are what is widely used, and we need more transparency on it because I think a lot of people are being conned.
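      The RAM point is easy to back up with envelope maths. A hypothetical sketch (the bytes-per-parameter figures are the usual rules of thumb; this ignores KV cache and runtime overhead, which add more on top):

```python
def model_memory_gb(params_billions, bytes_per_param=2.0):
    """Rough weight-only memory estimate for an LLM.

    bytes_per_param: 2.0 for fp16/bf16 weights, roughly 0.5 for
    4-bit quantisation. KV cache and activations come on top.
    """
    return params_billions * bytes_per_param

# A 405b model at fp16 wants ~810 GB just for weights, far beyond any
# consumer box, while an 8b model 4-bit quantised fits in ~4 GB.
print(model_memory_gb(405))                     # 810.0
print(model_memory_gb(8, bytes_per_param=0.5))  # 4.0
```

      Which is why a service selling cheap access to a "405b" model deserves suspicion: the hardware bill doesn't add up.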

      It also hands ammo to the old grey beard naysayers that have been against everything new since Windows 95.

  3. Len

    We know LLMs are poor at summarising

    LLMs keep demonstrating that summarising is their weak spot. They can shorten text but, because they are inherently stupid and have no idea what they are doing, these "AI" implementations are unable to distinguish the important from the unimportant. And that's the key to summarising.

    When ChatGPT summarises, it actually does nothing of the kind.

    AI worse than humans in every way at summarising information, government trial finds

    1. sabroni Silver badge
      Thumb Up

      Re: We know LLMs are poor at summarising

      Good find @Len!

    2. ITMA Silver badge
      Devil

      Re: We know LLMs are poor at summarising

      And yet this is one of the key "applications" AI is being sold on.

      My own view is that if you ask someone (you employ) to summarise something, you are expecting them to read and understand it. Then write a summary.

      If they are just using AI to do it, then why are they being paid? They are clearly neither reading nor understanding the material they've been asked to summarise - in other words, not doing the job they've been asked to do.

  4. Anonymous Coward
    Anonymous Coward

    With hindsight it is truly amazing that News of the World could generate totally bogus news, while drawing less than 50kW of power to do so.

    If Apple switches to an AI summary of NotW and The Onion, will it be filled with truths?

    1. Anonymous Coward
      1. Headley_Grange Silver badge

        I was going to write something along the lines of ....evidence that the "I" in AI is plain wrong because a real person would read the rock-eating advice and realize that it was crap...... and then I thought, "bleach".

        1. LBJsPNS Bronze badge

          Remember, there are humans who drive off damaged bridges because they're blindly following their GPS.

  5. Anonymous Coward
    Anonymous Coward

    "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

    If Apple are allowed to push untrue content from their lying AI without any consequences, then everyone should be allowed to push untrue content from their lying BS generators without consequences.

    if, in this post truth world, lies are free speech, and we have to put up with lies as alternative facts, we should embrace that, produce our own, take it to the extreme. It costs almost nothing compared to building an actual AI, it can be a few lines of python code, can run on a raspberry pi.

    Taking the moral high ground won't save us from this sinking ship of shit so we might as well help it hit rock bottom as soon as possible.
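    The "few lines of python code" claim holds up. A purely illustrative template-filler (every name and headline part here is invented for the joke; no real company or service is implied):

```python
import random

# Invented fill-in-the-blanks parts; no LLM, no GPU, runs on a Raspberry Pi.
subjects = ["Apple", "A major bank", "Your ISP"]
verbs = ["confirms", "denies", "accidentally leaks"]
objects = ["secret AI backdoor", "free iPhone giveaway", "CEO resignation"]

def fake_headline(rng=random):
    # Pick one of each and glue them together: instant "news".
    return f"{rng.choice(subjects)} {rng.choice(verbs)} {rng.choice(objects)}"

print(fake_headline())
```

    Three lists and an f-string: the marginal cost of a lie really is near zero.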

    1. Mentat74
      Trollface

      Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

      Here's some more fake headlines :

      "Tim Cook dead at age 56"

      "Apple to give away free Iphones for christmas at all Apple stores !"

      "Personal details of millions of Iphone users leaked on the dark web"

      "Tim Cook stepping down as CEO of Apple"

      "Tim Cook to legally change his name to Tim Apple"

      1. Anonymous Coward
        Anonymous Coward

        Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

        Shirley you meant Pyne Apple .... ?

        1. Anonymous Coward
          Anonymous Coward

          Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

          Shirley you actually meant "Meth Cooke"

        2. gnasher729 Silver badge

          Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

          APPL (a small oil company) has been defunct for many years. The company you make stuff up about is traded as AAPL.

          1. Doctor Syntax Silver badge

            Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

            That's what comes of trying to look clever by using an encoding instead of the name.

          2. Anonymous Coward
            Anonymous Coward

            Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

            > The company you make stuff up about is traded as AAPL.

            No; the company I made up shit about was APPL and quite deliberately so. I have never made up any shit about Apple ... Your Honour, M'lud.

    2. Anonymous Coward
      Anonymous Coward

      Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

      Apple have forgotten there's no Section 230 protecting this. This is all on them, it's Apple which is publishing incorrect and inaccurate summaries of articles by other organisations which could bring them into disrepute. I wonder if the BBC could reach for the UK's famously restrictive libel laws.

    3. John Brown (no body) Silver badge

      Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

      "if, in this post truth world, lies are free speech, and we have to put up with lies as alternative facts, we should embrace that, produce our own, take it to the extreme. It costs almost nothing compared to building an actual AI, it can be a few lines of python code, can run on a raspberry pi."

      Oh yeah, it's all true. I was just watching a YouTube video the other day saying that Trump and his MAGAs are all repressed gays, and that rednecks have guns on the rack in their huge pickups because they all have tiny dicks. It MUST be true!!!

      1. anonymous boring coward Silver badge

        Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

        That bit actually is mostly true.

      2. Groo The Wanderer - A Canuck

        Re: "APPL shares tank after researchers confirm Chinese backdoor in all products since 2013"

        Drumpf and company are too indiscriminate and tasteless to be gay. No self respecting gay individual would be caught dead with Musk or Drumpf's hair!

        No, for that crowd, it's "any orifice, any where, any time", I'm afraid. They have neither shame nor standards.

  6. heyrick Silver badge

    glue cheese to pizza and eat rocks

    I stopped by a well known burger flinger on the way home from work because it's been a bit of a shit day and I just want to stare at the wall rather than get up off my arse and cook stuff.

    Arguably the cheese is some sort of yellow glue, and to be honest I'm thinking there might be more nutritional value in rocks. At least the chips were hot for a change. Hot chips are nice, cold chips (the usual kind) are a sort of grim that would be a torture in hell (you can have all the chips you want in the afterlife, but they're the cold congealed manky ones that should have been thrown out half an hour ago...).

    1. Phil O'Sophical Silver badge

      Re: glue cheese to pizza and eat rocks

      IIRC those sorts of restaurants in France sell beer as well, so some things are still ok.

      1. This post has been deleted by its author

        1. dangerous race
          Happy

          Re: Examples?

          'Look at the big brain on Lil Endian.'

  7. Ace2 Silver badge

    SWMBO asked one of these monstrosities to summarize a research paper on ivermectin and hydroxychloroquine, for $WORK. The summary said that they are effective for treating COVID. That’s… not what the study said.

    The whole LLM thing needs to be scrapped. It’s a dead-end.

    1. stiine Silver badge
      Coffee/keyboard

        It's not a dead-end, it's a roundabout.

      1. John Brown (no body) Silver badge

        entrances but no exits? After all, a lot of this AI shit is coming out of California :-)

        1. StewartWhite Bronze badge
          Linux

          Indeed.

          "We are programmed to receive

          You can check out any time you like

          But you can never leave"

          Penguin icon as that's the nearest El Reg has to an eagle.

      2. amanfromMars 1 Silver badge

        Re: The whole LLM thing is not a dead end, it’s a roundabout @stiine

        I prefer to imagine and do work within everything treating it as a black hole ..... sucking in anything and spewing it out elsewhere all jumbled up as something quite different engaging and entertaining or disturbing and terrifying dependent upon one’s future suspected worth ...... although who/what makes that decision is surely still a riddle, wrapped in a mystery, inside an enigma; without a currently known or readily available master key.

    2. Mishak Silver badge

      "they are effective for treating COVID"

      Unfortunately, you could probably argue it was accurate if it had extended its "research" to the wider internet, as it would probably find more references stating they are effective than those that say not :-(

      Bleach anyone?

  8. JimmyPage
    Alert

    Eventually it will libel

    someone very rich.

    (We need a popcorn icon)

    1. StewartWhite Bronze badge
      Megaphone

      Re: Eventually it will libel

      True, but the FAANGs have deeper pockets than any individual, and both the UK and the USA have "the best justice that money can buy".

  9. that one in the corner Silver badge

    Apple declined to comment ...

    Why should they comment? Other than to send a polite "Thank you" to the BBC for being so complimentary about Apple's services.

    At least, that is what the Apple PR flacks believe, after they read the Apple Intelligence summary of the Beeb's communication (hey, those are busy and important flacks, they don't have time to read it all themselves; those lunches won't eat themselves).

  10. Sorry that handle is already taken. Silver badge
    Stop

    "This accident highlights the inability of AI systems..."

    It's not an accident. They didn't accidentally roll out a headline summary bot. They didn't accidentally fail to verify that an LLM was the right tool for the job. They didn't accidentally fail to check the bullshit output before publishing it.

    1. John Brown (no body) Silver badge

      Re: "This accident highlights the inability of AI systems..."

      Of course not! It was all the fault of a "rouge"[*] engineer!

      [*], sorry, just following what seems to have become an El Reg spelling tradition :-)

  11. Anonymous Coward
    Anonymous Coward

    Disgusting of Apple to threaten jobs like this.

    The BBC is perfectly capable of generating inaccurate headlines on its own, thank you.

    1. navarac Silver badge

      Re: Disgusting of Apple to threaten jobs like this.

      The BBC is certainly NOT UNbiased, that is for sure. Far too WOKE these days.

      1. LBJsPNS Bronze badge

        Re: Disgusting of Apple to threaten jobs like this.

        Don't cry, dearie, you'll smear your makeup.

      2. jospanner Silver badge

        Re: Disgusting of Apple to threaten jobs like this.

        I shit my pants because of woke, it’s everyone’s fault but mine

        1. LBJsPNS Bronze badge

          Re: Disgusting of Apple to threaten jobs like this.

          "God damn it, which one of you bastards shit my pants again?!?!"

    2. ITMA Silver badge
      Devil

      Re: Disgusting of Apple to threaten jobs like this.

      But the BBC does have to deal with the consequences:

      https://www.bbc.co.uk/news/articles/c5yd9j8j62go

  12. Howard Sway Silver badge

    New BBC News slogan

    "Who you gonna believe? Me or your lying AIs"

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: New BBC News slogan

      Take it from me that they’re working on their own AI implementations.

      1. This post has been deleted by its author

  13. jezza99

    I've switched off Apple Intelligence

    It doesn't pertain to this article directly, but I have found AI to just interfere with my workflow. I suddenly couldn't find emails, and it kept interrupting while I was composing messages.

    The thing with the current LLMs is that they give very confident answers, which may or may not be right. They will never respond "I don't know" or "I think this is right, but you may want to fact-check it". They don't show you their workings.

    Thus, I think they are pretty dangerous for anything which depends on factual accuracy.
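
    The point about unearned confidence can be shown with a toy sketch (nothing to do with Apple's actual pipeline, just the general mechanism): a softmax over raw scores always yields a tidy probability distribution with a clear "winner", even when the scores are pure noise, so the output looks decisive regardless of whether the model knows anything.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution summing to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Pure noise: the "model" has no real basis for any answer.
random.seed(0)
logits = [random.gauss(0, 3) for _ in range(5)]
probs = softmax(logits)

# The distribution still has a confident-looking top choice,
# even though the input was random numbers.
best = max(range(len(probs)), key=lambda i: probs[i])
print(f"top choice: {best}, confidence: {probs[best]:.2f}")
```

    There is no "abstain" option in that arithmetic, which is one reason a summariser will state a fabrication as flatly as a fact.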

  14. Winkypop Silver badge
    Pirate

    Bollocks ahoy Captain!

    It’s not the obvious inaccuracies you need to fear. They look after themselves, whilst providing some mirth.

    It’s the minor inaccuracies that nobody checks that lead to the total decay of fact.

  15. Kevin McMurtrie Silver badge
    Devil

    Adversarial training of corporations

    Somebody needs to craft innocuous articles that summarize similar to, "Apple overstated revenue by billions." Maybe Apple will turn AI off before their stock crashes, maybe not.

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: Adversarial training of corporations

      so you could write all sorts of libelous articles but not actually publish them. Just store them on your OneDrive and definitely not share them with anyone, so that nothing would ever read them from there...

  16. Anonymous Coward
    Anonymous Coward

    No chunk taken from my Apple

    Mine is complete with no missing chunk taken by Eve. And neither did my first computer cost $666.66. But then I'm not a twat like them.

    F Apple with a broom handle prison-style.

  17. Donn Bly
    Coat

    Not an AI Problem

    This isn't an AI problem. Editors in news media have been creating "click-bait" headlines for shock value since long before the days of the Internet. How often have you picked up a newspaper or read an article from a mainstream news source where the headline contradicted the article that followed it? The only thing here is that computers are doing it faster, putting hard-working editors out of work.

    1. This post has been deleted by its author

    2. doublelayer Silver badge

      Re: Not an AI Problem

      Bad headlines are nothing new, but these are probably still worse. Editors may pick headlines intended to mislead, that make bad summaries, or ones designed to make the article sound more interesting than it is, but they tend not to write headlines that diametrically oppose the article unless they didn't read it or confused one article with another. This bot did that all on its own with no pressure causing it to do so. That's likely to happen a lot more frequently than an editor making such a massive mistake.

      I've always wondered why they don't have the article authors write the headlines. At least in non-clickbaity examples, that should get an accurate summary in there. I suppose we've now solved that problem. The article and headline will be written by the same thing: an AI bot that made both of them up.

    3. Ianab

      Re: Not an AI Problem

      Click-bait headline or not, the AI was supposed to summarize the actual article, to save you having to read it all yourself. If it made up an extended version of the click bait, then it failed at its supposed task, which was to summarize the actual article content. It's the opposite of a "useful" tool at that point, as now you have to actually read the article to double-check what the "AI" told you about it.

      It's a basic problem with current AI: it creates a "word salad" by association of words and phrases. It produces plausible-sounding sentences, but has zero actual understanding of the Real World.

      The glue-and-pizza incident possibly came about when it came across text that said "Using mozzarella helps glue the other ingredients to the base". Now it associates the word "glue" with "pizza" when it mixes up its word salad. Not very intelligent.
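
      A crude way to see how pure word association produces fluent nonsense is a bigram chain. It is a far simpler mechanism than an LLM (the corpus and wording below are made up for illustration), but it fails in the same direction: locally plausible, globally clueless.

```python
import random
from collections import defaultdict

# A tiny made-up corpus mixing recipe text with commentary.
corpus = ("using mozzarella helps glue the other ingredients to the base "
          "glue is not an ingredient the base holds the ingredients").split()

# Record which words follow which: pure association, no meaning.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, n=8):
    """Emit n words by always hopping to some previously seen successor."""
    out = [start]
    for _ in range(n - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

random.seed(1)
# Each adjacent pair occurred in the corpus, so it reads smoothly,
# but the sentence as a whole asserts nothing true.
print(babble("glue"))
```

      An LLM's statistics are vastly richer than a bigram table, but the failure mode the commenter describes is recognisably this one, scaled up.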

    4. Jellied Eel Silver badge

      Re: Not an AI Problem

      The only thing here is that computers are doing it faster, putting hard-working editors out of work.

      This is the problem with the MSM, and the challenges they face from alternative media. 'AI' editors can grammar and spell check articles, but can't reliably 'fact check' them. 'AI' journalists can ingest stories from wire services and massage them into stories, but can't 'fact check' them either. And 'AI' can't do one of the important things journalists should be doing, ie investigative journalism. But that takes time and money, so human journalists struggle to do that anyway. Governments are probably just fine with this, and journalists not being able to hold their feet to the fire.. And if they do, their stories can just be dismissed as 'fake news' if they contradict the official misinformation.

    5. CowHorseFrog Silver badge

      Re: Not an AI Problem

      Click bait titles have more truth than Ai generated titles.

  18. ShortLegs

    Apple AI cocked up the summary of a BBC news item? Given the appalling sentence [mis]construction and grammar in some recent BBC articles, I'm not sure the BBC isn't using AI-generated content in the first place. It's either that, or many of their "journalists" left school with poor English grammar comprehension.

    1. gnasher729 Silver badge

      AI is perfectly capable of producing absolute nonsense with perfect sentence construction and beautiful grammar.

      1. amanfromMars 1 Silver badge

        A misstep to be corrected?

        AI is perfectly capable of producing absolute nonsense with perfect sentence construction and beautiful grammar. .... gnasher729

        That surely makes AI positively human-like, gnasher729 ....... which is distressing, is it not?

        Quite obviously would that AI be a work in progress requiring considerably more work to render improvements to performance rather than have developments copying and having to cope with failures and exploitable vulnerabilities inherent in humans/sub-optimal subjects.

    2. Handy Plough

      They're using ChatGPT...

  19. gnasher729 Silver badge

    The buck stops with the BBC. They are the ones responsible for showing what they show. They are free to use a tool like an automatic summariser, but it’s the BBC’s job to check its output.

    1. Richard Tobin

      er what?

      The BBC didn't show this headline. Apple did, attributing it to the BBC.

    2. nobody who matters Silver badge

      Have I misunderstood? You surely cannot be suggesting that Apple publishing a headline which stated something completely different from what was published by the BBC, is nonetheless the fault of the BBC ?

      1. nobody who matters Silver badge

        Hmmmm..... clearly somebody thinks that the BBC is somehow responsible for the mistake of a completely separate organisation. Perhaps they could explain how they reach that peculiar conclusion?

  20. RM Myers
    Coat

    Apple Intelligence - the myth continues

    Okay, to be fair, you really shouldn't criticize someone for having low intelligence. Doesn't Apple deserve the same consideration?

  21. Handy Plough

    Hang on. Has it occurred to anyone that a 'bad' summary might just be a reflection on a poorly written article? I mean, this is BBC News - just another example of sensationalist churnalism that is a result of 24 hour news...

    1. nobody who matters Silver badge

      As I have actually read the news story as published on the BBC News website, I can assure you that it was definitely NOT a "poorly written article" by the BBC. The responsibility for distorting and misleading is entirely down to Apple and their useless unintelligent language tool.

    2. David Nash

      If that is the case it just shows that AI is not up to the job. Humans would not make the same mistake even for a poorly-written article, assuming it didn't assert outright untruths.

  22. I Am Spartacus

    AI - Automated Inaccuracy

    As an IT professional I distrust any new technology that has not had extensive testing backed by provenance. AI is right up there with the worst technology possible.

    I had to give a talk on a few astronomy issues to my local amateur group. Running out of time, I asked ChatGPT to summarise the research paper and give me a 2-minute summary that I could read out. And it did. It was very readable. And totally wrong. It was only when I scanned the text that I realised that the original paper was based on Hubble imagery, whereas the ChatGPT version claimed that ALMA, the radio telescope in Chile, was responsible. This isn't a simple mistake - it is out-and-out fabrication. Nowhere in the paper did it reference ALMA at all. I actually called one of the UK researchers (as a fellow of the Royal Astronomical Society I can do this) and he went from disbelief, through astonishment, to anger - his work of several years was being reported incorrectly.

    After that I asked ChatGPT how many books or papers I was responsible for. It gave me a nice list of books. None of them had anything to do with me, but it had added my name to the credited authors. It would have been nice to get the publication payments!

    My real worry is when I see AI being used for medical diagnostics. Can we trust it to make the correct diagnosis? Well, we know that we can't. A skilled professional has to validate each positive diagnosis and check the findings. But what about the false negatives - when AI tells the clinician that there is no sign of cancer, and that message is just repeated verbatim to the patient? If the patient later dies of cancer, do their heirs have a case to make against the clinician? The AI? The people who built the AI?

  23. pryannow

    I would be more interested in knowing how Apple came to this conclusion.

  24. Anonymous Coward
    Anonymous Coward

    And yet

    Politicians like Trump make up lies constantly, many malicious, and the media just say “meh”.

    Mind you, there’s a distinct lack of intelligence in these cases.

    It’s a funny old world.

  25. spold Silver badge

    AIpple's Bottom of the Barrel Content scraping crumbles, ferments an a-peel to deCider to stop the rot.

  26. tyrfing

    I wonder if this headline generator is actually less accurate than the average human doing the same job though.

    I've seen some really bad and inaccurate headlines.

    1. Jou (Mxyzptlk) Silver badge

      AI has to be better than humans in that regard. Currently it is still below, since a detect-match-remix "intelligence" is simply not that...
