back to article Atlas vuln lets crims inject malicious prompts ChatGPT won't forget between sessions

In yet another reminder to be wary of AI browsers, researchers at LayerX uncovered a vulnerability in OpenAI's Atlas that lets attackers inject malicious instructions into ChatGPT's memory using cross-site request forgery. This exploit, dubbed ChatGPT Tainted Memories by browser security vendor LayerX's researchers, who found …

  1. Eclectic Man Silver badge

    I have never been a fan of automatic logging in to anything, and, not wanting to use any AI, will certainly be avoiding Atlas, if I can. But with articles on the Register questioning the productivity 'benefits' of commercial use of AI, you have to wonder whether it is worth the effort.

    I could attempt a little jokette along the lines of"

    "Here's how the attack works:

    The user logs into ChatGPT."

    but cannot be bothered.

  2. Neil Barnes Silver badge
    Mushroom

    Remind me again

    Why anyone thinks this sort of crap is a good idea? Beyond the 'separating investors from their cash', obviously.

  3. Camilla Smythe

    I call Bollocks.

    "Keeping people safe is core to how we build. We're constantly strengthening our models and our defenses against threats like phishing and prompt-injection attempts, and we appreciate researchers who surface potential risks."

    Your AI model is a black box. You have no idea why it does what it does and no insight as to how it is going to fuck up until it fucks up and someone else tells you about it.

POST COMMENT House rules

Not a member of The Register? Create a new account here.

  • Enter your comment

  • Add an icon

Anonymous cowards cannot choose their icon