AI chatbots amplify creation of false memories, boffins reckon – or do they?

AI chatbots, known for their habit of hallucinating, can induce people to hallucinate too, researchers claim. More specifically, interactions with chatbots can increase the formation of false memories when AI models misinform. Computer scientists at MIT and the University of California, Irvine, recently decided to explore the …

  1. FeepingCreature

    Not facing the real question

    Surely the most interesting question would be whether the false memory persists more with a chatbot confirming the narrative than with a *human* confirming it.

    Like, as it stands, this isn't "chatbot causes false memories" so much as "known false-memory-causing technique works with a chatbot too." Which really should not surprise anyone.

    1. JamesTGrant Bronze badge

      Re: Not facing the real question

      Yes, I was thinking of the experiment:

      Show someone a video and then ask:

      How fast was the car going when it smashed into the wall?

      How fast was the car going when it collided with the wall?

      Folk tend to ascribe a faster speed when asked using the ‘smashed’ verb.

      In lawyer speak: ‘Objection, leading’

      1. Flightmode

        Re: Not facing the real question

        Interestingly, one of the authors of the paper referenced in the article - Elizabeth F. Loftus - was the one who came up with the experiment you're referring to. "Reconstruction of Automobile Destruction: An Example of the Interaction Between Language and Memory" was funded by the US Department of Transportation and published by Loftus and John C. Palmer in 1974. It's actually pretty interesting - not only did those who were asked using the word "smashed" estimate a higher mean speed of impact, they were also more than twice as likely as those asked using the word "hit" to give an affirmative answer to the question "Did you see any broken glass?".

        A five-page copy of the 1974 paper is at https://webfiles.uci.edu/eloftus/LoftusPalmer74.pdf for those who are interested.

        1. JamesTGrant Bronze badge

          Re: Not facing the real question

          Ah nice - that’s the fella - thank you!

  2. Anonymous Coward
    Anonymous Coward

    so

    Instead of AI they have managed to create a "lying trumpbot"

    sounds about right

  3. Ian Johnston Silver badge

    And what did this tell us that wasn't in Lord Clyde's report on the Orkney child abuse scandal 32 years ago? Except that there may be more research funding available if you add "AI" to "Does asking misleading questions and giving misleading feedback on the responses increase the chance of false memories?"

    Coming next: Does the Pope use Catholic AI? Do bears produce AI research in the woods?

  4. Neil Barnes Silver badge
    Pint

    Kudos for the Philip K Dick reference

    and one of these --->

    1. TimMaher Silver badge
      Pint

      Re: Kudos for the Philip K Dick reference

      That reminds me.

      I had forgotten to plug in the sheep.

      Thanks… have a beer———>

    2. Anonymous Anti-ANC South African Coward Silver badge
      Pint

      Re: Kudos for the Philip K Dick reference

      We can misremember it for you wholesale

      Now that triggered some memories about FreeJack...

      Is today's science fiction better than yesterday's science fiction?

      Good SF deserves a ------>

  5. PinchOfSalt

    Authority

    This has been worrying me for a while.

    To coin a phrase, we are used to - and sort of expect - 'computer says no' behaviour from our IT. In fact we strive for it, to make our systems unquestionably reliable and deterministic. We've had around 75 years of teaching people that, by and large, when a computer gives you a response you can rely upon it (Horizon excepted, obviously).

    This however is being up-ended with AI. These are not computer systems that we are used to. They are non-deterministic, and almost deliberately so.

    However, the general public has not been taught to differentiate between the two. And in fairness, neither have most of us.

    So, there is a challenge here in that the scenario pits a person who knows they have a fallible memory against a computer which that person believes has a perfect memory and does not wilfully deceive or make errors of judgement.

    So, in some ways there is something new here. The relationship between people and people in authority positions has been explored many times over; however, the relationship between a person and a system which the person has preconceived ideas about - namely, that it is infallible - is new and probably needs research.

    1. Anonymous Coward
      Anonymous Coward

      Re: Authority

      The study should also test participants on whether they know that AI chatbots can give wrong information. Perhaps knowing about AI hallucinations reduces the effect, or maybe it doesn't.

      When ChatGPT came out, it was surreal arguing with coworkers and bosses with software development backgrounds that they shouldn't trust ChatGPT's responses. They misinterpreted my warnings as luddism, an inability to cope with change, or ignorance about AI. I had a prior background in AI and thought it would show through in my critical commentary, but people came away with the impression that I was anti-AI and ignorant. I am pro-AI but against ignorance about AI and its limitations.

  6. Dan 55 Silver badge
    Terminator

    Human prompt injection

    All this time we've been under the illusion that humans are prompt-injecting AIs, when really our AI overlords are prompt-injecting us.

  7. ChrisElvidge Silver badge

    US Police allowed to lie?

    If the police are allowed to lie in interviews, why should chatbots be prevented from doing the same?

  8. Anonymous Anti-ANC South African Coward Silver badge

    When can we expect to see the first Mentats to be released?

    A good Mentat worth his salt should be able to outperform any AI, and not even be plagued by hallucinations...
