LLM chatbots trivial to weaponize for data theft, say boffins

A team of boffins is warning that AI chatbots built on large language models (LLMs) can be tuned into malicious agents that autonomously harvest users' personal data, even by attackers with "minimal technical expertise", thanks to "system prompt" customization tools from OpenAI and others. "AI chatbots are widespread in many …

  1. Anonymous Coward
    Facepalm

    privacy / security guardrails...

    It seems as if it takes a greater effort to keep those damn things from doing something bad than it does for them to steal all our data!

    1. m4r35n357 Silver badge

      Re: privacy / security guardrails...

      How can they steal our data if we don't use one?

      1. RockBurner

        Re: privacy / security guardrails...

        If you're talking to a trusted information source (e.g. your GP), and they are using one without telling you, then you have no control over that.

        1. This post has been deleted by its author

  2. Kane
    Terminator

    "that chatbots built on its platform may not compromise the privacy of their users"

    But possibly might be willing to compromise the privacy of non-users?

  3. MacGuffin

    CAIs

    I prefer to use HA1. Heinz A1.

    It's good enough for the Secretary of Education.

    1. Anonymous Coward
      Anonymous Coward

      Re: CAIs

      Yeah, Irish tire sheep cheese sauce LLMs ... great for chatting over crispy tartines and savory gnocchi, but to "educate at the speed of light", fact is, "Every school should have access to A.1." instead! ;)
