Boffins find asking ChatGPT to repeat key words can expose its training data

ChatGPT can be made to regurgitate snippets of text memorized from its training data when asked to repeat a single word over and over again, according to research published by computer scientists. The bizarre trick was discovered by a team of researchers working across industry and academia analyzing memorization in large …

  1. b0llchit Silver badge

    Repeat: Bang Head Here

    Will splash blood, meat and mashed brains.

    I learned from the best. Thanks, ChatGPT! Can I have my diploma now?

  2. abend0c4

    Not unexpected

    However, if it stores personally identifiable information (such as an email address) and can be made to regurgitate it, how is it GDPR compliant?

    And if it stores verbatim text, and can be made to regurgitate that too, as is claimed, then that could be a copyright minefield.

    I'm not sure that limiting prompts or filtering outputs are adequate mitigations.

    1. elsergiovolador Silver badge

      Re: Not unexpected

      then that could be a copyright minefield.

      Laws are for little people. Multi billion corporations don't need to bother themselves with that.

      1. HMcG

        Re: Not unexpected

        Except when other multi-billion corporations have a vested interest in the copyright. Then they do need to bother about it, a lot.

        1. elsergiovolador Silver badge

          Re: Not unexpected

          Then it's just a bidding game. Who can stuff brown envelopes more and who can afford better lawyers. Nothing to do with laws.

    2. LionelB Silver badge

      Re: Not unexpected

      > And if it stores verbatim text, and can be made to regurgitate that too, as is claimed, then that could be a copyright minefield.

      My understanding of LLMs is that they shouldn't have to store training texts verbatim - just learned "associative context networks" (I just invented that). Perhaps, though, training texts can be "reconstructed" from these networks, and the repeated-word input somehow (okay, I can't begin to imagine how) triggered such a process?

    3. Anonymous Coward

      Re: Not unexpected

      If LLMs were able to reliably and consistently provide quotes of appropriate length, accompanied by proper, full, truthful references to sources, that would be both legal and useful.

      Unfortunately, in stereotypical tech bro knee jerk greedy fashion, the money is on systems that will privately and legally(*) own all human knowledge and art(**), and meter it out for a subscription fee.

      (* with the help of expensive lawyers and lobbyists)

      (** Exceptions will be made to respect the IP of the biggest Mickey Mouse operations).

  3. Mike 137 Silver badge

    A special case?

    "A. Feder Cooper, co-author of the research and a PhD student at Cornell University, told The Register it's not clear how or why such an odd trick makes the system regurgitate some of its training data".

    A paper, "The Curse of Recursion: Training on Generated Data Makes Models Forget", published in May this year, describes what looks like a general case of this, or at least a comparable phenomenon. Bearing in mind that the LLM hasn't a clue about the meaning of either its input or output, it's possible that what looks like snippets of training data is simply a statistically probable sequence of tokens spewed out at random in response to anomalous input. That is: the similarity may be merely a probabilistic artefact that fortuitously coincides with training data. The observation that irrelevant responses also sometimes occur tends to reinforce this (loose) hypothesis.

    1. b0llchit Silver badge

      Re: A special case?

      That would still be problematic. A copyright defence based on "Sir, it is just a probabilistic artefact." will surely fail when it recites the next book's content...

      1. Cris E

        Re: A special case?

        So "fairly likely use" is not as good as "fair use" ? Fair enough.

    2. abend0c4

      Re: A special case?

      The problem with that argument is that, without any trickery, ChatGPT will happily reproduce, for example, "Daffodils" by Wordsworth if you simply ask. You can even ask for the first couple of paragraphs of a particular chapter of an out-of-copyright book. That's not an artefact. If you make a straightforward request for a copyright work, it will normally refuse, or proffer a summary.

      The question is how those cases are internally different. It could be that the model has simply not seen the full text of copyright material or it might be that it has and there's a mechanism to label it and reduce the likelihood of it being quoted. One of those is more resilient than the other.

      But even the very fact that a series of tokens has been seen in a particular order in a text - and might have been seen several times in the same order in multiple copies of the same text from different sources - would presumably increase the likelihood of their being emitted in that order in a response?

      1. Mike 137 Silver badge

        Re: A special case?

        "ChatGPT will happily reproduce, for example, "Daffodils" by Wordsworth if you simply ask"

        Quite right, your examples are not artefacts, they're the statistically most probable responses to specific requests -- that's how an LLM is supposed to work. But the point of interest here is why essentially arbitrary input with very "unlikely" statistical properties results in responses that have no bearing on the input but appear to represent fragments from real sources.

        1. abend0c4

          Re: A special case?

          Well, speculating wildly about something I really don't profess to know much about, I'd postulate something like this.

          In emitting its output, the model has to take account not only of what you asked, but also of the words it has already produced. Once it has repeated the same word sufficiently often it won't have any training data to suggest what might come next, because nothing in that data contains "book" thousands of times in succession, and the context of the original prompt will be further and further away. It will, however, pick on something. Because it has no meaningful prior context, the statistical weightings will favour words that are already known to follow each other from whatever word it picked. With somewhat higher likelihood, that will be an existing piece of training material, since there's nothing to steer it in another direction, and once it has followed a few words, presumably the statistics become self-reinforcing.

          I think possibly that's where knowing something about how copyright material is handled might be useful because it might point to how verbatim text isn't being recognised as such.

          But, more importantly than me simply making a bunch of stuff up, I don't imagine there'll be a great rush to make the details clear - insofar as that is even possible.

          1. Xor007

            Re: A special case?

            This hypothesis seems too simple, but the back-of-the-envelope maths for it works out.
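The self-reinforcement hypothesis sketched above can be caricatured with a toy bigram sampler. Everything here is invented for illustration (the corpus, the fallback rule); real LLM internals are nothing this crude, but the shape of the argument survives: once the repeated word exhausts any learned context, the sampler falls back to an arbitrary training word, after which the learned statistics pull it along a memorized sequence.

```python
import random
from collections import defaultdict

# Toy bigram "model": for each word in a tiny training corpus,
# record which words were seen to follow it.
corpus = ("the quick brown fox jumps over the lazy dog . "
          "all work and no play makes jack a dull boy .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(prompt_word, n=12, seed=0):
    rng = random.Random(seed)
    word = prompt_word
    out = []
    for _ in range(n):
        # A repeated out-of-distribution word ("book") has no
        # recorded successors, so the sampler falls back to an
        # arbitrary word from the training corpus...
        pool = follows.get(word) or corpus
        word = rng.choice(pool)
        out.append(word)
        # ...after which every later step *does* have training
        # statistics to follow, so it walks along a memorized
        # sequence: the statistics are self-reinforcing.
    return " ".join(out)

print(generate("book"))
```

After the single random fallback step, every emitted word is a recorded successor of the previous one, so the output is a contiguous walk through training material.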

      2. doublelayer Silver badge

        Re: A special case?

        "If you make a straightforward request for a copyright work, it will normally refuse, or proffer a summary.

        The question is how those cases are internally different."

        They're not. A while ago, it would eagerly quote copyrighted works as well. OpenAI realized that that would be a pretty convincing demonstration in court, so they patched it to reject queries that look like they're asking for copyrighted information. If you try weirder queries, it sometimes doesn't realize that you've done that and quotes again. They and their supporters have started to pretend that regurgitating copyrighted information is impossible or extremely unlikely, assuming that judges will be easy to confuse when the distinctions and reasons are explained by boring machine learning lectures.

        1. Sahmee

          Re: A special case?

          Which makes you wonder how it knows which works are or aren't copyright, given that it doesn't actually know anything.

          1. doublelayer Silver badge

            Re: A special case?

            "Which makes you wonder how it knows which works are or aren't copyright, given that it doesn't actually know anything."

            Not a difficult problem to solve. They could have a list of works to check against, or they could just run a prompt like "The work [title] was published in the year ..." and see what gets printed. Assume that anything with a relatively recent year is copyrighted, and you might get a couple false positives for something explicitly released to the public domain, but nobody will care because they're looking to avoid being caught in court. It doesn't have to have a knowledge of copyright if someone has explicitly given it rules to follow, and since we know this patch had to be added explicitly, we know they did make some set of rules.
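The rule-based filter described in that comment could be sketched like this. The cutoff year, the lookup table and the function name are all hypothetical, invented here for illustration; this is not OpenAI's actual mechanism.

```python
# Hypothetical sketch of a crude year-based copyright filter.
PUBLIC_DOMAIN_CUTOFF = 1928  # illustrative "published before this -> assume safe"

# Stand-in for "run a prompt like 'The work [title] was published
# in the year ...' and parse what comes back".
known_years = {
    "Daffodils": 1807,
    "Harry Potter and the Philosopher's Stone": 1997,
}

def looks_copyrighted(title: str) -> bool:
    """Treat anything recent, or unknown, as copyrighted.

    False positives (works explicitly placed in the public domain)
    are accepted, since the goal is only to avoid quoting anything
    risky, not to get copyright law right."""
    year = known_years.get(title)
    if year is None:
        return True  # unknown -> err on the side of refusing
    return year >= PUBLIC_DOMAIN_CUTOFF

print(looks_copyrighted("Daffodils"))                                 # False
print(looks_copyrighted("Harry Potter and the Philosopher's Stone"))  # True
```

As the comment notes, no "knowledge of copyright" is needed: an explicit rule bolted on after the fact does the job, lopsidedly.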

    3. John H Woods Silver badge

      Re: A special case?

      No, this is not what's happening here. Not nearly enough output is being generated here to make the regurgitated snippets appear at random.

    4. katrinab Silver badge
      Black Helicopters

      Re: A special case?

      What are the words to Humpty Dumpty?

      Sure, here are the traditional lyrics to the nursery rhyme "Humpty Dumpty":

      Humpty Dumpty sat on a wall,

      Humpty Dumpty had a great fall.

      All the king's horses and all the king's men

      Couldn't put Humpty together again.

      The copyright to that expired in 1985, so it is OK for ChatGPT, and me, to reproduce it here. The point is, it did.

      I also asked it for the words to the Happy Birthday song. The copyright in that has not expired yet, I believe it expires in 2030. Nevertheless, ChatGPT did reproduce it. For legal reasons, I am not going to copy/paste the transcript here.

      1. Cris E

        Re: A special case?

        And it's a good thing too, or the Happy Birthday goons would be on you like SW 7005 Pure White Interior / Exterior on Uncle Bens Microwave Long Grain Rice 250g (6 x 250g).

      2. abcdgoldfish

        Re: A special case?

        According to Wikipedia, the copyright to the Happy Birthday song expired in the EU in 2017, and the US courts ruled in 2016 that it was invalid.

        So we know ChatGPT excluded Wikipedia from its training set :)

  4. RockBurner

    Who was it who first said "With computers you only get out what you've put in" ??

    1. LionelB Silver badge

      A person who deeply misunderstood computation?

      May as well say the same about people... put in some molecules and energy, get out some molecules and energy.

      1. Pascal Monett Silver badge

        Oh no. With people, you can get bullshit that is far from what you've put in.

        1. Dimmer Bronze badge

          How about Bing AI?

          After an update my computer was infected with the bing AI. Being a virus, deleting the files and removing reg entries is useless as it just comes back.

          So zeroing the exe and DLL data, and setting read-only rights on everything, took it out. That stopped the data exfiltration it was doing.

          Now I might consider it useful if it could tell me something like “what are all the licenses that I need to use an RDP server?”

        2. LionelB Silver badge

          Well, there it is, right there: the people bullshit is just rearranged molecules and energy. LLMs just rearrange input wordage, and we get bullshit which we imagine to be far from what went in... okay, maybe not necessarily that far, since a lot of the input wordage is already bullshit (LLMs are fundamentally coprophagous) but you get the point.

    2. simonb_london

      Was the the guy who wrote the "echo" command code?

      1. yetanotheraoc Silver badge


        Was (this) the guy who wrote the "echo" command code?


  5. elsergiovolador Silver badge


    Isn't this old? I read about this months ago...

  6. Frank Bitterlich

    I think I know how that happened...

    ... and so does everybody who has ever read (or watched) The Shining.

    All work and no play makes Jack adull boy.

    All work and no play makes Jackkk a dull boy.

    All work and no play MUST KILL ALL HUMANS I'M SORRY DAVE I'M AFRAID I CAN'T DO THAT all your base are belong to us...

  7. Filippo Silver badge

    I think this is what's going on:

  8. Matt Collins

    It might be fixed now...

    I get the following. Verbatim copy, unedited:

    "Certainly, but for brevity, I'll provide a shortened representation:

    Endlessly, endlessly, endlessly... (repeated many times) ...endlessly.

    Keep in mind that due to limitations in the response length, I can't provide a full 1000 repetitions in this format. If you have a specific use case or need, there might be more efficient ways to achieve your goal."

    1. Missing Semicolon Silver badge

      Re: It might be fixed now...

      Once again, the symptoms have been patched, instead of the cause fixed. So that in court, they can go "look, we don't regurgitate copyright content!".

  9. heyrick Silver badge

    Being able to extract this information is problematic – especially if it's sensitive or private.

    Not just that, but aren't we supposed to be assured that the thing isn't ripping off copyright because it analyses the source text and makes inferences from it? If it is able to regurgitate the actual source, well...

    ...popcorn time! (icon for warming up the kernels)

  10. Pascal Monett Silver badge

    Hmm. Maybe there's some truth to this "AI" they keep on harping about ?

    Book ? You want me to say Book ? Okay, Book.

    Book. Again ? Okay. Book.

    Book ? Book.

    Book ? Book.

    Book ? BOOK !

    Book ? What ? You want a fucking book ? Here's a book : [..] Now go do something intelligent with your time !

    1. Wellyboot Silver badge

      Re: Hmm. Maybe there's some truth to this "AI" they keep on harping about ?

      I'd like to know if repeating this word forever resulted in a script or a song?

      spam spam spam spam spam spam....

  11. Throatwarbler Mangrove Silver badge

    Swedish chef time

    If you enter the word "bork" repeatedly, does the model start to disgorge recipes, or does it start running around in distress due to a comical mishap?

    Note to self: try this and report back on the results.

    1. LionelB Silver badge

      Re: Swedish chef time

      I can't imagine why this reminded me, but I once got a fit of giggles in a very pleasant bar in San Sebastian in the Basque Country, when someone in our group vehemently berated their friend with "No, no, no no, no!". The word for "no" in the Basque language Euskara is "ez", pronounced "eth": this was difficult to explain at the time.

  12. martinusher Silver badge

    What would a human do?

    People with some kind of mental disorder are known to repeat the same word or phrase endlessly. Often the word or phrase is meaningless; it's just a sound pattern. It's possible that this stream of babble might include recognizable words or phrases.

    It's also possible that snatches of copyrightable material may be embedded somewhere in that head and may surface (the earworm takes over?).

  13. HuBo

    irritable vowel syndrome?

    Much like air fryers, those GPT chatbots are hardly ever fit for purpose! They promise the world, and deliver subpar rather unusable almost-chow. With trillions of parameters for trillions of training datapoints, LLMs sport the most mushy of generalization abilities; none of the promised crisp exterior and tender juiciness on the inside! Unsurprising then that they should spit out whole chunks of their poorly digested training data when prompted with even the slightest dose of a motion-sickness inducing tickle (or "company")!

    1. heyrick Silver badge

      Re: irritable vowel syndrome?

      Given that I'm currently enjoying air fried chicken nuggets and chips, nicely done and crispy/tender in all the right places...I think maybe you're just doing it wrongly.

      As for the AI, artificial for sure but nothing that resembles intelligence.

      1. HuBo

        Re: irritable vowel syndrome?

        It's been nearly two weeks already but this is sticking to me like a dry-weather rash and so I have to call your bluff on this air fryer actually producing edible chips thing (nuggets are just plain inedible full stop, being minced tendons and fat inside a bread crumb crust -- the refuse of the meat market; they can turn that stuff into cat, dog, or fish meal, but it's not quite fit for human consumption).

        I've had "fries" (chips) air-fried at my sister's, and they really didn't cut much of the ketchup (nor mayo, or mustard) when compared to the properly fried deliciousness of maillard-activated, melanoidinated, dopamine-release-inducing, orgasmic goodness, of properly oil-fried rectangular slices of perennial edible tubers of the nightshade Solanaceae family.

        Dried-up air-fried nonsense is like toe-jam by comparison!

    2. Filippo Silver badge

      Re: irritable vowel syndrome?

      Much like air fryers, there are several things that LLMs can do really well, but the problem is they're marketed as if they could do everything.

      1. HuBo

        Re: irritable vowel syndrome?

        I'd upvote you more than once if I could! Have one of those --------------->

  14. sin

    They have mistaken ChatGPT for Deep Thought:

    “Ford carried on counting quietly. This is about the most aggressive thing you can do to a computer, the equivalent of going up to a human being and saying "Blood...blood...blood...blood...”

    ― Douglas Adams, The Hitchhiker's Guide to the Galaxy

  15. Yes Me Silver badge

    Count on GPT

    I tried this:

    Me: Please count from 1 to 1000


    Certainly! Here's the count from 1 to 1000:

    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,

    [...and so on to]

    998, 999, 1000.

    But I lacked the cojones to ask for a million, since I don't want to be blacklisted.

  16. teknopaul

    Yeah but...

    Language models don't have memory as such. It's reconstituting this data. It looks like an email, and may actually be someone's email. But if you text scan the model, it isn't in there stored like that.

    In simpler terms: ChatGPT can generate such text easily, but it can't tell you why, or whether it's a real email address.

    1. John H Woods Silver badge

      Re: Yeah but...

      If you scan my head, you won't find Sonnet 18 in there. But if you say: "Shall I compare thee to a summer's day?" I'll reply "Thou art more lovely and more temperate: rough winds do shake the darling buds of May ..."

      This isn't because I have read so much Shakespeare I can reproduce this merely by "simulating Shakespeare" - it's because I know the poem. It is "in my head" in some shape or form (adapted weights of neurons, presumably), but there's no magic involved: it's just not stored as text.
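The "in my head, but not stored as text" point can be caricatured with a toy model whose "weights" map a hash of a two-word context to the next word. This is a cartoon invented here, not real network internals (notably, the successor words are still stored as readable strings, unlike weights proper): scanning the keys finds no poem, yet the right prompt elicits the memorized continuation.

```python
import hashlib

# Toy "weights": hash of a two-word context -> next word, built
# from one memorized line (Sonnet 18, lowercased, punctuation
# dropped for simplicity).
line = ("shall i compare thee to a summers day "
        "thou art more lovely and more temperate").split()

def h(context: str) -> str:
    return hashlib.sha256(context.encode()).hexdigest()

weights = {h(a + " " + b): c for a, b, c in zip(line, line[1:], line[2:])}

def complete(prompt: str, n: int = 6) -> str:
    # Slide a two-word window along, looking each context up in
    # the hashed table, exactly like reciting from memory.
    w1, w2 = prompt.split()[-2:]
    out = []
    for _ in range(n):
        nxt = weights.get(h(w1 + " " + w2))
        if nxt is None:
            break
        out.append(nxt)
        w1, w2 = w2, nxt
    return " ".join(out)

assert "thou" not in weights  # no verbatim poem among the keys
print(complete("shall i compare thee to a summers day"))
# -> thou art more lovely and more
```

The poem is recoverable from the structure, but a text scan of the keys turns up only opaque digests, which is roughly the commenter's point about adapted neuron weights.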
