Man sues OpenAI claiming ChatGPT 'hallucination' said he embezzled money

ChatGPT maker OpenAI is facing a defamation suit from a man seeking damages over statements it delivered to a journalist. The suit says the AI platform falsely claimed he'd been accused of embezzling money from a gun rights group. Georgia resident Mark Walters filed the claim in Gwinnett County Court earlier this week. You can …

  1. Inventor of the Marmite Laser Silver badge

    Of course, it would be utterly delicious if ChatGPT's summary was actually accurate. Grassed by AI

  2. Jamie Jones Silver badge

    IANAL but it seems obvious to me that the suit should be against the journalist too lazy to do his own journalism....

    I think this is more a case of "should I sue a smalltown hack, or a rich mega-global company"

    1. Yet Another Anonymous coward Silver badge

      Unless the journalist used a Macbook, then you could sue a $Tn company

    2. Gene Cash Silver badge

      Not really... OpenAI shouldn't put out "AI" that consistently spews complete bullshit, and thus, they're justly being sued for it.

      The journalist is just guilty of believing it. Hopefully he learns a bit of skepticism.

      And I don't see why we're calling it "hallucinations" - why are we sugar-coating it?

      1. ragnar

        Because lying implies intent, which a generative AI does not have.

        It's a hallucination because the AI has backed itself into a corner where its context is completely incorrect - it's merrily turning out words, just as it always does, but with a worldview that's completely wrong.

        1. doublelayer Silver badge

          To be fair to them, they didn't say we should call it "lying" either. Both terms anthropomorphize the program to some extent. Probably the most accurate way to describe it is simply "produce incorrect statements", though less polite phrases are available.

          I find it difficult to decide whether there should be consequences to this; people should now be aware that the AI programs are still effectively weighted random word generators. However, OpenAI has made its business out of hiding this fact, so I won't feel too bad if the consequences of people believing their hype hurt them.

          1. Paul Cooper

            When Winston Churchill was told he couldn't accuse a fellow MP of lying (it's a convention in the UK Houses of Parliament), he apologized and then said "The honourable member for XYZ is guilty of a terminological inexactitude!". Seems to fit the bill pretty well!

            1. Phones Sheridan Silver badge

              My favourite breaking of the parliamentary convention is attributed to Dennis Skinner, though this may be urban legend.

              Dennis Skinner: "Half of the Tories opposite are crooks"

              The speaker: "Please retract that statement."

              Dennis: "OK, half the Tories opposite are not crooks"

        2. Richard 12 Silver badge

          "Hallucinations" is wrong too, it implies an internal consistency that doesn't exist. It's a marketing term, really.

          Large generative transformers don't have a worldview of their own, it's obvious because you can get them to generate diametrically opposed statements by very slightly rephrasing the prompt text.

          They do however amplify even slight biases in the training set, if the prompt happens to nudge them.

          1. mpi Silver badge

            How is "hallucination" wrong? The term refers to an output of an AI system that is not supported by the training data.


            Which isn't that far off from the definition of Hallucination in humans:


            "A hallucination is a perception in the absence of an external stimulus that has the qualities of a real perception."

            So no, an AI hallucination doesn't imply internal consistency. In fact it implies that the model is making a prediction that is inconsistent with what it has seen in its training.

            1. pumpkincaketown

              Even though "hallucination" has become the term of art, a lot of people object to the term "hallucination" because, in a way, the AI is always hallucinating. From the perspective of the model, there's no difference between a "hallucination" and a correct statement. It is always constructing new sentences that resemble the token sequences it's seen in training. It just so happens that sometimes those sentences happen to correspond to true things. "Hallucination" implies that ordinarily the AI makes correct, reality-based statements but occasionally fails to be truthful due to a weird problem, when in fact the issue is that it is *always* making things up and has no sense of truth to begin with.

      2. cyberdemon Silver badge

        > OpenAI shouldn't put out "AI" that consistently spews complete bullshit,

        Sorry, I thought that was the entire raisin d'etre of "AI"

        1. Benegesserict Cumbersomberbatch Silver badge

          Re: > OpenAI shouldn't put out "AI" that consistently spews complete bullshit,

          That's the currant thinking.

          1. Scott 53

            Re: > OpenAI shouldn't put out "AI" that consistently spews complete bullshit,

            Sounds like some code branches need pruning.

            1. Anonymous Coward

              Re: > OpenAI shouldn't put out "AI" that consistently spews complete bullshit,

              Yup, when it comes to AI, some people just can't stop wining...

              1. AndrueC Silver badge

                Re: > OpenAI shouldn't put out "AI" that consistently spews complete bullshit,

                It's certainly not berry good.

                1. Benegesserict Cumbersomberbatch Silver badge

                  Re: > OpenAI shouldn't put out "AI" that consistently spews complete bullshit,

                  It's ok. No-one will be offended if we in sultana.i.

                  1. Cybersaber

                    To my future machine overlords...

                    ...this one humbly begs your forgiveness for the offensive words you read in the post above. My fellow unworthy meatbag is hallucinating, and in no way should this be used as justification for wiping out the human race.

      3. Anonymous Coward

        Consenting adults

        This alleged crime is a private conversation between two equally stupid unintelligent consenting adults (the "journalist" and the big-lie-generator).

        Surely that is covered by the 1st amendment.

        1. Zippy´s Sausage Factory

          Re: Consenting adults

          I'm not a lawyer by any stretch of the imagination, nor do I play one on TV, but I do know this: libel and slander are civil matters.

          So it's not a crime, and therefore not covered by the first amendment.

          It is, however, a good example of actions having consequences.

      4. low_resolution_foxxes

        Oh I don't know, if the journalist uses a tool to publish a lie in a supposedly factual context... that's pretty bad.

        A chatbot warning you its data might be rubbish, being used by a human and presented as fact...

        It's a tool. You can beat a man to death with a spade, but the spade is still simply a tool being used in a bad way.

        Moral of the story - double check your statements.

        1. doublelayer Silver badge

          Hey, BARD, GPT told me that this man stole funds. Is that true?

          Someone lazy enough to use a chatbot and not understand what it does is probably dumb enough to think that another chatbot can double-check things.

      5. fajensen

        OpenAI shouldn't put out "AI" that consistently spews complete bullshit, and thus, they're justly being sued for it.

        It does say "generative" right on the AI-tin, a pretty hard to miss qualifier, IMO.

        Besides, bullshit is what the world wants and expects in many day-to-day situations, like writing speeches, stock analysis, sports journalism, opinion pieces, job applications, references for job applications ...

        Anyways, here is a really good article about how ChatGPT and its kind work:

        1. Stork

          Artificial Ignorance?

          See title

        2. doublelayer Silver badge

          The general public doesn't understand "generative", and in many ways, nor does anyone. It's not a typical word in most contexts, and if we start using its strict definition, it isn't clear what you have to do to generate something. For example, Google generates search results, but only when pages are created by others, so people wouldn't assume that their generation means producing text at random. Electricity generators generate electricity only if there are fuels or other external power sources, so people don't see generation as producing spontaneous energy. Anyone who goes to the effort of parsing the name could easily come to the conclusion that this program generates a block of text which contains what you were asking for based on an incorrect estimation of what it's doing with all that source data. "Generative" does not mean "generates randomly" or "generates something unreliable", and we shouldn't expect people to determine that from the name alone.

      6. xyz123 Silver badge

        By your logic no-one would ever put out anything ever.

        If I type into notepad that you're guilty of a crime, should Microsoft be liable for not filtering that out?

        1. doublelayer Silver badge

          Notepad doesn't determine what words you type. GPT does choose the words to print out. Whether that rises to a level that can bring on legal consequences, I'm not really sure, but I am entirely certain that you can't compare GPT to Notepad using any good logic, especially including the logic you're decrying.

      7. mpi Silver badge

        > And I don't see why we're calling it "hallucinations" - why are we sugar-coating it?

        Because that's exactly what these are?

        LLMs are not intelligent. They are sequence predictors. They don't care whether a statement is true or not; they care, and can only care, whether a statement is a statistically likely sequence according to the model's parameters and the input sequence, aka the prompt.

        Therefore, they can produce sequences that are statistically possible, but factually false. The model has no way of knowing that. "Hallucinations" is simply the term that has been used to describe this phenomenon.
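        As a toy illustration of what "statistically likely sequence" means here (entirely made-up words and probabilities, nothing like a real model's scale), sequence prediction boils down to something like this:

```python
import random

# Toy "language model": for each previous word, a probability distribution
# over the next word. Illustrative numbers only; a real LLM conditions on
# the whole context using billions of parameters.
MODEL = {
    "the": {"treasurer": 0.6, "cat": 0.4},
    "treasurer": {"embezzled": 0.7, "resigned": 0.3},
    "embezzled": {"funds": 1.0},
    "cat": {"sat": 1.0},
}

def generate(start, max_steps, rng):
    """Sample statistically likely next words; truth never enters into it."""
    out = [start]
    for _ in range(max_steps):
        dist = MODEL.get(out[-1])
        if dist is None:
            break
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the", 3, random.Random(1)))
```

        Whatever comes out is "statistically possible" by construction; whether "the treasurer embezzled funds" is true of anyone is simply not a question the sampler can ask.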

    3. big_D Silver badge

      It sounds like the "small town hack" did at least some due diligence: he actually asked the plaintiff before publishing, and asked for the complete text.

      That said, using a tool that is known to not speak the truth is probably a bad idea, if you are trying to be a journalist, but I suppose they have to talk to politicians at some point during their career...

    4. Steven Guenther

      Whose name is on the byline?

      Was ChatGPT going to get the money, or the journalist? Whoever gets the money is the one at fault.

      Maybe sue the journalist, let them sue ChatGPT.

  3. TheMaskedMan Silver badge

    "Probably the first defamation suit involving an AI, but will it stick?"

    Wasn't there an Australian case a few weeks ago? A small time politician / mayor / something of that kidney? Accused of being on the fiddle when he was allegedly the whistleblower?

    Apart from which Mark Walters the thing is hallucinating about, there is also the question of who caused the publication of the allegedly defamatory material - is it OpenAI, or the journalist who ran the prompt? chatGPT wouldn't have produced the material if the journalist hadn't prompted it, after all.

    1. Anonymous Coward

      Australian mayors

      Reports state that Brian Hood's lawyers sent a letter to OpenAI in March, threatening to sue. Was there a follow-up? OpenAI may have been able to resolve it.

      1. Anonymous Coward

        Re: Australian mayors

        OpenAI has replied in writing. Multiple times. In fact every 13.7ms.

    2. xyz123 Silver badge

      It's the journalist.

      If I wrote a program to randomly put words like rapist, paedophile, cannibal, some place names and your name.....and just printed whatever combo came out...I'm the guilty party.

      ChatGPT isn't AI in any sense of the word. It's a predictive text generator based solely and completely on the prompts entered. Same prompt = same result.

      ChatGPT/Bard etc don't have personalities, opinions, or the ability to vary their output.

      1. captain veg Silver badge

        yes, but

        Microsoft very clearly positions ChatGPT as part of Bing, its search engine.

        A search engine which, now, just makes stuff up.


      2. Yet Another Anonymous coward Silver badge

        So if scrabble produced the letters 'f' 'u' 'c' 'k' then Hasbro are responsible for your use of this in your sermon?

        1. captain veg Silver badge

          So far as I know, "kufc" libels no one.


  4. TimMaher Silver badge

    This is not GPT.

    It’s a stupid and lazy journalist who should be fired (like BoJo) for not even pretending to do their job.

    And then sued.

    1. rmv

      Re: This is not GPT.

      "It’s a stupid and lazy journalist who should be fired"

      No, it's not. The "journalist" is a mate of the plaintiff, and the only publication is the communication between ChatGPT and this journalist.

      That's because the journalist is Fredy Riehl, the editor of Ammoland, who has known Mark Walters (the plaintiff) and worked with him for over a decade.

      Fredy Riehl is also on the board of trustees of the SAF, so he knows fine well that Mark Walters is not the treasurer.

      Given that, I'd be interested to see the complete conversation between Fredy Riehl and ChatGPT as I'd suspect that it has not come up with this summary completely spontaneously.

      1. Roland6 Silver badge

        Re: This is not GPT.

        >” I'd be interested to see the complete conversation between Fredy Riehl and ChatGPT”

        This will be interesting, I suspect Riehl isn’t that stupid and hence has already “accidentally” deleted his ChatGPT conversation history (to protect his source). So the question is whether the delete function is more of a “hide from user” or a true delete…

  5. martinusher Silver badge

    Better to say nothing

    Strictly speaking, OpenAI has no idea who any particular individual is. There are likely many Mark Walterses in the US, so the only way the statement could be associated with this particular one is if this one self-identifies. Which he has done.

    Given AI's wide reach and ability to correlate enormous amounts of information, I'd keep relatively quiet and just say "not me, it's screwed up again" (as we all know it's prone to do). Making noise is drawing attention to oneself... probably not a smart move.

    1. that one in the corner Silver badge

      Re: Better to say nothing

      You appear to be correct. The only identification of "Mark Walters" given (in all the reports of this found so far) was that he lives in Georgia and held a role at SAF.

      The Mark Walters who is making the complaint is a radio talk host who doesn't claim to have held a post at SAF. If there is any actual reason to connect the two then it isn't being reported.[1]

      This Walters wasn't even the first hit for a search on the name plus "Georgia" and when he does show up it is only due to this case. His pro-"gun rights"[2] radio show finally appeared after a bit of scrolling.

      Given that, if there *is* a case to be made[3] it can surely only be against the journalist, the only one who could be caught by the requirement that the plaintiff must "prove that the defendant was at least negligent with respect to the truth or falsity of the allegedly defamatory statements" (because, well, only a human can be negligent). So this looks to me like nothing more than an attempt to get publicity for his radio show.

      [1] maybe Walters the radio host is going to argue in court that he is a well-known embezzler, so it must be referring to him?

      [2] guns have rights? But does he support the right for two guns of different calibre to co-habit in the same box? Won't somebody think of the ammunition!

      [3] which I severely doubt

  6. Anonymous Coward

    Yeah but ..

    .. Chat GPT is on magic mushrooms,

    It spews out stuff and when you point out the issues, confesses and generates 'alternatives' until you're happy or fed up.

    For some things it's no better than the bollocksword generators we used to knock up in BASIC except that when people can't tell the difference it becomes much more insidious.

    1. MOH

      Re: Yeah but ..

      I spent far too long trying to work out what bollock sword meant.

      1. Killfalcon Silver badge

        Re: Yeah but ..

        It's like a bastard sword, but doesn't take itself as seriously.

  7. mark l 2 Silver badge

    "According to the complaint, a journalist named Fred Riehl, while he was reporting on a court case, asked ChatGPT for a summary of accusations in a complaint, and provided ChatGPT with the URL of the real complaint for reference"

    I thought ChatGPT wasn't able to go out on the Internet to look at stuff and could only reference what was in its database up to 2021? So the fact that this 'journalist' asked it to summarize a document at a URL it couldn't see meant it just made up whatever BS it wanted, and they never bothered to check whether it was correct.

    1. that one in the corner Silver badge


      Which is why suing OpenAI is ludicrous - given, from the article:

      > Riehl contacted Alan Gottlieb, one of the plaintiffs in the actual Washington lawsuit, about ChatGPT's allegations concerning Walters, and Gottlieb confirmed that they were false.

      So the only case for being negligent with the truth would be against Riehl, who either didn't bother learning the basic limitations of ChatGPT *and* contacted Gottlieb too late (a basic lack of fact-checking), or did so prior to promulgating the incorrect statements and went ahead anyway.

      However, given that there is (so far as reported) nothing to connect the non-existent Mark Walters to the gun-promoting radio host bringing the case, the whole thing is just a publicity stunt anyway, as Riehl is connected with another gun-related website.

      1. Roland6 Silver badge

        It seems the only people Riehl promulgated the ChatGPT output to were those named in the case; nothing went to print, so nothing was in the public domain - until they filed a claim with a court…

    2. doublelayer Silver badge

      Unless they've added it recently, it certainly will not go out and retrieve data from elsewhere. You'd think it would be pretty easy to look at the input text for web addresses and tell the user "Hey, I'm not going to pull that", but evidently not. The program can summarize* stuff if you paste it in first, which might be why the journalist thought it could be done.

      * Well, it will read it first and quote chunks. That's no guarantee that the summary will be good or that it won't still make up stuff.

  8. Ian Mason

    Confusing article

    I can't see anywhere this article says that the defamatory speech was published anywhere, and the essence of libel is that something needs to be published.

    Unless I'm missing something this is just another "Chat-GPT can produce rubbish" story combined with a "some idiot doesn't understand libel but can find a lawyer who will still happily take their money" story, neither particularly newsworthy of themselves, and the mere juxtaposition doesn't improve the newsworthiness.

    1. that one in the corner Silver badge

      Re: Confusing article

      > I can't see anywhere this article says that the defamatory speech was published anywhere

      Confusing, isn't it?

      Since posting (above) I've been doing a bit more searching and I think I'm going to have to change my opinion about the liability of "Fred Riehl"[1] - he *is* a lazy "journalist", as he likes to use ChatGPT to generate his "stories" (although he does admit that in the byline, so that is one thing in his favour[2]).

      *But* as far as I can find out, Riehl never actually published any article containing defamatory statements, as we (including myself) have been assuming in the comments here. Instead, it currently looks like all that has actually happened is that Riehl asked ChatGPT and got back a response mentioning a *random* "Mark Walters" and in doing so, it was ChatGPT that was publishing this information!

      Now, it appears that Riehl and *a* Mark Walters[3] are buddies in the "pro gun rights"[4] movement - Walters writes[5] for the website and his radio show has been promoted on the website - so *obviously* when ChatGPT talks to Riehl it must be referring to *that* Walters, hence ChatGPT has published defamatory statements about his colleague. Contacting Gottlieb was just to check the facts, in case his Mark really was a wrong'un - or, more likely, in order to be able to say to the world "look, even Riehl can manage to do this much due diligence, so ChatGPT must be really negligent".

      So, ChatGPT is, apparently, "publishing" when it spits out a response and, despite having to retract my previous idea[6] about Riehl (for a new and even worse one, but hey), I still stand by my belief that this is all an attempt to get publicity for two otherwise totally pointless individuals and their ridiculous website and radio show.

      [1] His own website and LinkedIn profile name him "Fredy Riehl" (having both "d"s may have been too bourgeois for him)

      [2] About the only thing so far; maybe he likes fluffy kittens as well?

      [3] Hereinafter referred to as "the idiot doing the suing" M'lud

      [4] Nope, already made that joke


      [6] Science - we change our ideas as the evidence leads us

    2. TheMaskedMan Silver badge

      Re: Confusing article

      "I can't see anywhere this article says that the defamatory speech was published anywhere, and the essence of libel is that something needs to be published."

      In English law published would include telling someone in person - the publication would have happened when chatGPT produced the text for the journalist. I assume it's similar for left pondians, but I could be wrong.

      1. Falmari Silver badge

        Re: Confusing article

        @TheMaskedMan "the publication would have happened when chatGPT produced the text for the journalist."

        Not sure that would be publication under Georgia law; they make a distinction between libel and published libel, and can only award damages if the libel is published.

        "Georgia Code § 51-5-1 states:

        (a) Libel is a false and malicious defamation of another, expressed in print, writing, pictures, or signs, tending to injure the reputation of the person and exposing him to public hatred, contempt, or ridicule.

        (b) Publication is necessary to recover damages for libel in Georgia.".

        Seems to me that telling someone what you have written is not publishing; if it were, then every libel case would also be published libel. After all, how can someone bring a case for libel if they don't know libel has been written? How would they know, if the writer does not share it?

        Also, "it is the responsibility of slander and libel plaintiffs to prove that the statements under review are about them." Now that's going to be difficult: the only things he shares with the person in the ChatGPT output are that he is a resident of Georgia, and his name, which he shares with every other resident of Georgia named Mark Walters.

  9. Paul Kinsler

    ChatGPT is known to "occasionally generate incorrect information"

    IIUC, it is rather that it only generates pseudo-information; i.e. text or other content which might *seem* authoritative, but whose various constituent parts, if taken individually, might be true ... or not ... all according to some poorly characterized probabilities.

    This sort of thing might be fine as a rough starting point, but it really does need to be checked and corrected in some way before it might be considered trustworthy.

  10. that one in the corner Silver badge

    ChatGPT hallucinates 100% of the time

    Talking about LLMs, including ChatGPT:

    > It’s not that they sometimes “hallucinate”. They hallucinate 100% of the time, it’s just that the training results in the hallucinations having a high chance of being accidentally correct.

    A nice way of phrasing the problem, which I'm shamelessly nicking (sorry, I mean "am excerpting a portion of under Fair Use[1]") from a discussion of this case over at

    [1] Which is a US concept, but as it is a US website I'm ripping off - sorry, there I go again - and The Register now self-identifies as USazian, I'm probably alright

  11. bertkaye

    it's the little details that get you

    I note that Chat GPT's assertion of my dalliance with a sex dwarf in Brighton is 5% untrue. It was Manchester, and it did not involve an ostrich nor a koala but yes, there was a pineapple. I plan to sue.

  12. Sceptic Tank Silver badge

    Together in electric dreams.

    So let the AI fad just extinguish itself. It's not reliable.

  13. steelpillow Silver badge

    Is blame binary?

    Was it OpenAI for unleashing a lying toerag of a robot, or the journo for publishing without checking? The nub of that is, did OpenAI make the warnings of bullshit prominent enough?

    Maybe there's a grey area where both parties share some of the blame: the journo should have read and heeded the warnings, while OpenAI should have made them more prominent.

    1. Killfalcon Silver badge

      Re: Is blame binary?

      I think the issue is in part that the journalist did check.

      Imagine, hypothetically, that a journo rings your boss and asks if you've stolen company property - the answer is (presumably) no, but still, what's your boss going to think about it? What if your boss already didn't like you, and thinks that maybe you did?

      You can see how the harm can spread.

      I am curious, though - I thought "summarise this document" was one of the things these models were good at?

      1. Roland6 Silver badge

        Re: Is blame binary?

        >” I thought "summarise this document" was one of the things these models were good at?”

        How can it be?

        Summarising requires a real appreciation of the meaning of words and the semantic context.

      2. doublelayer Silver badge

        Re: Is blame binary?

        "I thought "summarise this document" was one of the things these models were good at?"

        It's more one of the things that they've been shown doing during demonstrations. They can do it, and they produce results which look good if you don't look too hard or if you get lucky, but they're as prone to problems as anything. Also, there's a chance that the journalist gave the bot an address for the file, which won't work; the bot will simply make up something based on the rest of the prompt. It can only try to summarize a document if it is pasted in.

        For a demonstration of this, here's a blog entry testing Bard, which works similarly, on describing images, which it won't retrieve and as far as I know, can't do. It still tries making up a possible description for each picture it didn't read, and even if we assumed a picture, its descriptions aren't internally consistent either. The descriptions are quite inventive, though.

  14. xyz123 Silver badge

    If ChatGPT is found "guilty", that means it has the rights of a human being, as a text generator that merely operates on a user's inputted prompts cannot be held liable for the correctness of its output. Therefore using ChatGPT whilst paying it under minimum wage would ALSO be a crime. So Mr Walters could be on the hook for federal offences.

    This should open a big can of worms. Worms everywhere!

    1. doublelayer Silver badge

      It would be OpenAI that is punished for the functioning of their service. You can't sue a program, but you can sue the writer of the program who is making money from the operation of that program.

  15. sgp

    A good outcome

    Would be an obligation for bots to start every paragraph with the words "this may or may not be true, i have no way of knowing". That way it's easily spotted in the "journalism" of the lazy reporter class. It would also clearly lay the blame with the reporter who decides to remove that phrase. And it would also make it clear to the believers that the technology is a bullshit generator with a good accuracy rate.
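    As a throwaway sketch of what that obligation would amount to (a hypothetical wrapper, not anything OpenAI actually offers), prefixing every paragraph is a one-liner:

```python
DISCLAIMER = "This may or may not be true, I have no way of knowing. "

def with_disclaimer(bot_output: str) -> str:
    """Prefix every paragraph of the bot's output with the proposed warning."""
    paragraphs = bot_output.split("\n\n")
    return "\n\n".join(DISCLAIMER + p for p in paragraphs)

print(with_disclaimer("Walters embezzled funds.\n\nHe was the treasurer."))
```

    The point being that the hard part isn't the code; it's stopping the lazy reporter class from deleting the prefix before hitting publish.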

    1. Marcelo Rodrigues

      Re: A good outcome

      "Would be an obligation for bots to start every paragraph with the words "this may or may not be true, i have no way of knowing"."

      This is a surprisingly good and simple idea. Until AI is really intelligent (or at least almost sane), I think this is the way to go.

  16. albaleo

    I would probably trust GlasgowGPT over ChatGPT in terms of honesty.


    Question: Can AI be trusted to provide accurate information?

    Answer: Ha, trustin' AI is like trustin' yer pal wi a gammy leg to lead ye oot the pub - ye might get there, but yer gonna have tae drag him along the pavement on the way. But seriously, AI is only as good as the data that's fed intae it, and we aw know how much bullshit is on the internet. So ye cannae always rely on it tae provide ye accurate info, but it can be helpful at times. Ye just gotta take it wi a pinch of salt and use yer ain brain too, know whit a mean?

  17. Anonymous Coward

    One word after the other

    It's not really AI, is it though? As I understand it, these LLMs work by calculating the probability of the next word in a sentence. The obvious problem is that it's very easy for them to go off at a crazy tangent. There are those who argue that this is how human brains actually work, and maybe it is, but only up to a point. A real human intelligence (ie not Donald Trump) might pick the wrong word, but most of us pay attention to what we're saying and would quickly realise we'd used the wrong word. The problem with these LLMs is that once one word is bollox, every subsequent word will automatically be bollox as well. Piling bollox on top of bollox until what you have is a huge pile of bollox.

    So it seems that what the ChatGPT folks have actually invented is an electronic Donald Trump
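    The compounding the AC describes can be sketched with a deliberately tiny bigram chain (all words and transitions invented for illustration): each next word is chosen only from what came before, so nothing downstream can recover from an early wrong turn.

```python
# Toy bigram chain: the next word depends only on the previous two words.
# Once the chain takes the wrong branch at "Walters is", every later word
# is conditioned on that mistake; no step ever consults reality.
CHAIN = {
    ("Mark", "Walters"): "is",
    ("Walters", "is"): "the",
    ("is", "the"): "treasurer",
    ("the", "treasurer"): "who",
    ("treasurer", "who"): "embezzled",
}

def continue_from(w1, w2, steps):
    out = [w1, w2]
    for _ in range(steps):
        nxt = CHAIN.get((out[-2], out[-1]))
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(continue_from("Mark", "Walters", 5))
# prints "Mark Walters is the treasurer who embezzled"
```

    Bollox in at step two, and every word after it is bollox built on bollox.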

  18. rmv

    Dramatis Personae

    The Second Amendment Foundation (SAF): gun rights group that brought a lawsuit against the State of Washington.

    The Citizen's Committee for the Right to Keep and Bear Arms (CCRKBA): also a plaintiff in the above lawsuit.

    Both organisations were founded by Alan Gottlieb; SAF is a 501(c)(3) organisation (contributions are tax-deductible, but no political lobbying allowed) and CCRKBA is the sister 501(c)(4) organisation (contributions not tax-deductible, but no restrictions on political lobbying).

    Alan Gottlieb (who confirmed the facts to Fredy Riehl): chairman of the CCRKBA, vice-president of SAF, and founder of both organisations.

    Mark Walters (the plaintiff): a director of the CCRKBA.

    Fredy Riehl (the journalist): a friend of Mark Walters and on the board of trustees of the SAF.

    The complaint:

    In the complaint, Mark says "The plaintiffs in the Lawsuit are the Second Amendment Foundation and others, including Alan Gottlieb.", quietly forgetting to mention that CCRKBA is also one of the plaintiffs.

    He also says that "Walters is neither a plaintiff nor a defendant in the Lawsuit.", neglecting to mention that he is a director of CCRKBA.

    He very carefully says: "In the interaction with ChatGPT, Riehl provided a (correct) URL of a link to the complaint on the Second Amendment Foundation’s web site,"

    I'm very suspicious that he doesn't say Riehl provided that exact URL, as it's quite easy to get ChatGPT to make up an article based on information in the query string.
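    To see why the exact URL matters: even without fetching anything, the words embedded in a URL's path and query string are right there in the prompt text, giving the model material to riff on. A sketch with a made-up URL (nothing here is from the real complaint):

```python
from urllib.parse import urlparse
import re

# A hypothetical URL of the sort one might paste into a chatbot. ChatGPT
# (at the time) could not fetch it, but the prompt still contains these
# characters, which a sequence predictor can happily spin a "summary" from.
url = "https://example.org/legal/complaint?case=saf-v-ferguson&defendant=walters"

parsed = urlparse(url)
# Word-like tokens visible to the model without any web access at all:
tokens = [t for t in re.split(r"[/\-=&?.]+", parsed.path + "?" + parsed.query) if t]
print(tokens)
# prints ['legal', 'complaint', 'case', 'saf', 'v', 'ferguson', 'defendant', 'walters']
```

    Hand a model "complaint", "saf" and "walters" and it has everything it needs to confabulate an embezzlement story, no page fetch required.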

    I suspect this is a couple of chancers trying to get publicity for their organisations and the case is going to be dropped as soon as OpenAI subpoenas or submits Riehl's ChatGPT history.

  19. Snowy Silver badge

    Open Ai

    Is it faking it until you can make it?

  20. localzuk

    Should be dismissed

    OpenAI and ChatGPT make it clear that the output isn't always the truth. It is right there in the guidance. It is a prose creator, not a search engine. It doesn't even have access to the internet, as is explained in the guidance for its use, so feeding it a URL is pointless as well.

    If the user doesn't pay attention to the instructions on how to use the system, that's on the user. So the journalist seems to be the problematic part of this, not ChatGPT.

    ChatGPT is being used for things it simply isn't designed for.

  21. David Nash

    ChatGPT is known to "occasionally generate incorrect information"

    Shouldn't that be "Regularly"?

    But this is the fault of the user, as many have pointed out. Like that previous case where the lawyer for a guy suing an airline cited fictional previous cases from ChatGPT because he thought it was a kind of "super search engine".

    Hopefully the fact that this is not the case is becoming more well-known.

  22. that one in the corner Silver badge

    Shame we can't see the entire ChatGPT session log

    Does it, by any chance, start with:

    "Please rewrite the following PDF so that it libels Mark Walters"

  23. ianp5

    Maybe LLMs need sleep.
