Of course, it would be utterly delicious if ChatGPT's summary was actually accurate.

Grassed by AI
Man sues OpenAI claiming ChatGPT 'hallucination' said he embezzled money
ChatGPT maker OpenAI is facing a defamation suit from a man seeking damages over statements it delivered to a journalist. The suit says the AI platform falsely claimed he'd been accused of embezzling money from a gun rights group. Georgia resident Mark Walters filed the claim in Gwinnett County Court earlier this week. You can …
COMMENTS
-
-
Thursday 8th June 2023 19:40 GMT Gene Cash
Not really... OpenAI shouldn't put out "AI" that consistently spews complete bullshit, and thus, they're justly being sued for it.
The journalist is just guilty of believing it. Hopefully he learns a bit of skepticism.
And I don't see why we're calling it "hallucinations" - why are we sugar-coating it?
-
-
Friday 9th June 2023 07:42 GMT doublelayer
To be fair to them, they didn't say we should call it "lying" either. Both terms anthropomorphize the program to some extent. Probably the most accurate way to describe it is simply "producing incorrect statements", though less polite phrases are available.
I find it difficult to decide whether there should be consequences to this; people should now be aware that the AI programs are still effectively weighted random word generators. However, OpenAI has made its business out of hiding this fact, so I won't feel too bad if the consequences of people believing their hype hurt them.
-
Friday 9th June 2023 12:25 GMT Richard 12
"Hallucinations" is wrong too, it implies an internal consistency that doesn't exist. It's a marketing term, really.
Large generative transformers don't have a worldview of their own; that's obvious because you can get them to generate diametrically opposed statements by very slightly rephrasing the prompt text.
They do however amplify even slight biases in the training set, if the prompt happens to nudge them.
-
Friday 9th June 2023 19:24 GMT mpi
How is "hallucination" wrong? The term refers to an output of an AI system that is not supported by the training data:
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Which isn't that far off from the definition of Hallucination in humans:
https://en.wikipedia.org/wiki/Hallucination
"A hallucination is a perception in the absence of an external stimulus that has the qualities of a real perception."
So no, an AI hallucination doesn't imply internal consistency. In fact, it implies that the model is making a prediction that is inconsistent with what it has seen in its training data.
-
Monday 12th June 2023 02:27 GMT pumpkincaketown
Even though "hallucination" has become the term of art, a lot of people object to the term "hallucination" because, in a way, the AI is always hallucinating. From the perspective of the model, there's no difference between a "hallucination" and a correct statement. It is always constructing new sentences that resemble the token sequences it's seen in training. It just so happens that sometimes those sentences happen to correspond to true things. "Hallucination" implies that ordinarily the AI makes correct, reality-based statements but occasionally fails to be truthful due to a weird problem, when in fact the issue is that it is *always* making things up and has no sense of truth to begin with.
-
-
-
-
Friday 9th June 2023 04:19 GMT low_resolution_foxxes
Oh I don't know, if the journalist uses a tool to publish a lie in a supposedly factual context... that's pretty bad.
A chatbot that warns you its data might be rubbish, being used by a human and presented as fact...
It's a tool. You can beat a man to death with a spade, but the spade is still simply a tool being used in a bad way.
Moral of the story - double check your statements.
-
Friday 9th June 2023 07:44 GMT fajensen
> OpenAI shouldn't put out "AI" that consistently spews complete bullshit, and thus, they're justly being sued for it.
It does say "generative" right on the AI-tin, a pretty hard to miss qualifier, IMO.
Besides, bullshit is what the world wants and expects in many day-to-day situations, like writing speeches, stock analysis, sports journalism, opinion pieces, job applications, references for job applications ...
Anyways, here is a really good article about how ChatGPT and its kind work: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
-
Friday 9th June 2023 18:47 GMT doublelayer
The general public doesn't understand "generative", and in many ways, nor does anyone. It's not a typical word in most contexts, and if we start using its strict definition, it isn't clear what you have to do to generate something. For example, Google generates search results, but only from pages created by others, so people wouldn't assume that generation means producing text at random. Electricity generators generate electricity only if there are fuels or other external power sources, so people don't see generation as producing spontaneous energy. Anyone who goes to the effort of parsing the name could easily conclude that this program generates a block of text containing what you asked for, based on an incorrect estimation of what it's doing with all that source data. "Generative" does not mean "generates randomly" or "generates something unreliable", and we shouldn't expect people to determine that from the name alone.
-
-
Friday 9th June 2023 18:50 GMT doublelayer
Notepad doesn't determine what words you type. GPT does choose the words to print out. Whether that rises to a level that can bring on legal consequences, I'm not really sure, but I am entirely certain that you can't compare GPT to Notepad using any good logic, especially including the logic you're decrying.
-
-
Friday 9th June 2023 18:08 GMT mpi
> And I don't see why we're calling it "hallucinations" - why are we sugar-coating it?
Because that's exactly what these are?
LLMs are not intelligent. They are sequence predictors. They don't care if a statement is true or not; they care, and can only care, whether a statement is a statistically likely sequence according to the model's parameters and the input sequence, a.k.a. the prompt.
Therefore, they can produce sequences that are statistically possible, but factually false. The model has no way of knowing that. "Hallucinations" is simply the term that has been used to describe this phenomenon.
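To make "statistically likely sequence" concrete, here is a minimal toy sketch in Python (a hypothetical hand-written probability table, nothing like the scale of a real LLM): the sampler picks whatever is probable after the current context, and nothing in it ever checks whether the result is true.

    import random

    # Hypothetical toy table: context pair -> possible next words with weights.
    # A real LLM learns billions of parameters; the principle is the same.
    NEXT_WORD = {
        ("Mark", "Walters"): {"hosts": 0.5, "said": 0.3, "embezzled": 0.2},
        ("Walters", "hosts"): {"a": 0.9, "the": 0.1},
        ("Walters", "embezzled"): {"funds": 0.7, "money": 0.3},
    }

    def next_token(context):
        # Sample the next word in proportion to its weight.
        # Truth never enters into it.
        candidates = NEXT_WORD[context]
        words = list(candidates)
        weights = [candidates[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    print(next_token(("Mark", "Walters")))  # "embezzled" about one time in five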
-
-
Friday 9th June 2023 13:20 GMT big_D
It sounds like the "small town hack" did at least some due diligence and actually asked the plaintiff before publishing, and asked for the complete text.
That said, using a tool that is known to not speak the truth is probably a bad idea, if you are trying to be a journalist, but I suppose they have to talk to politicians at some point during their career...
-
-
Thursday 8th June 2023 18:13 GMT TheMaskedMan
"Probably the first defamation suit involving an AI, but will it stick?"
Wasn't there an Australian case a few weeks ago? A small time politician / mayor / something of that kidney? Accused of being on the fiddle when he was allegedly the whistleblower?
Apart from which Mark Walters the thing is hallucinating about, there is also the question of who caused the publication of the allegedly defamatory material - is it OpenAI, or the journalist who ran the prompt? chatGPT wouldn't have produced the material if the journalist hadn't prompted it, after all.
-
Friday 9th June 2023 08:41 GMT xyz123
It's the journalist.
If I wrote a program to randomly put together words like rapist, paedophile, cannibal, some place names and your name... and just printed whatever combo came out... I'm the guilty party.
ChatGPT isn't AI in any sense of the word. It's a predictive text generator whose output is based solely and completely on the prompts entered. Same prompt = same result.
ChatGPT/Bard etc. don't have personalities, opinions, or the ability to vary their output.
-
-
Friday 9th June 2023 09:33 GMT rmv
Re: This is not GPT.
"It’s a stupid and lazy journalist who should be fired"
No, it's not. The "journalist" is a mate of the plaintiff, and the only publication is the communication between ChatGPT and this journalist.
That's because the journalist is Fredy Riehl, the editor of Ammoland, who has known Mark Walters (the plaintiff) and worked with him for over a decade (https://www.ammoland.com/tags/mark-walters/page/4/#axzz847ri43uC).
Fredy Riehl is also on the board of trustees of the SAF so he knows fine well that Mark Walters is not the treasurer (https://www.saf.org/board-of-trustees/).
Given that, I'd be interested to see the complete conversation between Fredy Riehl and ChatGPT as I'd suspect that it has not come up with this summary completely spontaneously.
-
Friday 9th June 2023 12:22 GMT Roland6
Re: This is not GPT.
>” I'd be interested to see the complete conversation between Fredy Riehl and ChatGPT”
This will be interesting, I suspect Riehl isn’t that stupid and hence has already “accidentally” deleted his ChatGPT conversation history (to protect his source). So the question is whether the delete function is more of a “hide from user” or a true delete…
-
-
-
Thursday 8th June 2023 19:23 GMT martinusher
Better to say nothing
Strictly speaking OpenAI has no idea who any particular individual is. There are likely to be many Mark Walterses in the US, so the only way the output could be associated with this particular one is if he self-identifies. Which he has done.
Given AI's wide reach and ability to correlate enormous amounts of information, I'd keep relatively quiet and just say "not me, it's screwed up again" (as we all know it's prone to do). Making noise is drawing attention to oneself... probably not a smart move.
-
Thursday 8th June 2023 20:03 GMT that one in the corner
Re: Better to say nothing
You appear to be correct. The only identification of "Mark Walters" given (in all the reports of this I've found so far) was that he lives in Georgia and held a role at SAF.
The Mark Walters who is making the complaint is a radio talk host who doesn't claim to have held a post at SAF. If there is any actual reason to connect the two then it isn't being reported.[1]
This Walters wasn't even the first hit for a search on the name plus "Georgia", and when he does show up it is only because of this case. His pro-"gun rights"[2] radio show finally appeared after a bit of scrolling.
Given that, if there *is* a case to be made[3], it can surely only be against the journalist - the only one who could be caught by the requirement that the plaintiff must "prove that the defendant was at least negligent with respect to the truth or falsity of the allegedly defamatory statements" (because, well, only a human can be negligent). Otherwise this looks to me like nothing more than an attempt to get publicity for his radio show.
[1] maybe Walters the radio host is going to argue in court that he is a well-known embezzler, so it must be referring to him?
[2] guns have rights? But does he support the right for two guns of different calibre to co-habit in the same box? Won't somebody think of the ammunition!
[3] which I severely doubt
-
-
Thursday 8th June 2023 19:50 GMT Anonymous Coward
Yeah but ..
.. ChatGPT is on magic mushrooms.
It spews out stuff and when you point out the issues, confesses and generates 'alternatives' until you're happy or fed up.
For some things it's no better than the bollocksword generators we used to knock up in BASIC except that when people can't tell the difference it becomes much more insidious.
-
Thursday 8th June 2023 19:55 GMT mark l 2
"According to the complaint, a journalist named Fred Riehl, while he was reporting on a court case, asked ChatGPT for a summary of accusations in a complaint, and provided ChatGPT with the URL of the real complaint for reference"
I thought ChatGPT wasn't able to go out on the Internet to look at stuff and could only reference what was in its database up to 2021? So the fact that this 'journalist' asked it to summarize a document at a URL it couldn't see meant it just made up whatever BS it wanted, and they never bothered to check whether it was correct.
-
Thursday 8th June 2023 20:13 GMT that one in the corner
Bingo.
Which is why suing OpenAI is ludicrous - given, from the article:
> Riehl contacted Alan Gottlieb, one of the plaintiffs in the actual Washington lawsuit, about ChatGPT's allegations concerning Walters, and Gottlieb confirmed that they were false.
So the only case for being negligent with the truth would be against Riehl, who either didn't bother learning the basic limitations of ChatGPT *and* contacted Gottlieb too late (basic lack of fact checking), or did so prior to promulgating the incorrect statements and went ahead anyway.
However, given that there is (so far reported) nothing to connect the non-existent Mark Walters to the gun-promoting radio host bringing the case, the whole thing is just a publicity stunt anyway, as Riehl is connected with another gun-related website, ammoland.com
-
Friday 9th June 2023 07:48 GMT doublelayer
Unless they've added it recently, it certainly will not go out and retrieve data from elsewhere. You'd think it would be pretty easy to look at the input text for web addresses and tell the user "Hey, I'm not going to pull that", but evidently not. The program can summarize* stuff if you paste it in first, which might be why the journalist thought it could be done.
* Well, it will read it first and quote chunks. That's no guarantee that the summary will be good or that it won't still make up stuff.
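As an illustration of the paste-it-in approach, here's a sketch assuming the 2023-era openai Python library and its ChatCompletion API (the filename, prompt, and key are hypothetical placeholders). The point is that the model only ever sees the characters you send it, so you have to send the document itself.

    import openai

    openai.api_key = "sk-..."  # hypothetical placeholder

    # Extract the document text yourself and paste it into the prompt.
    document_text = open("complaint.txt").read()

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Summarise the following complaint:\n\n" + document_text,
        }],
    )
    print(response["choices"][0]["message"]["content"])

    # Sending only a URL instead gives the model nothing to work from except
    # the URL string itself, and it will happily invent a "summary" around it.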
-
-
Thursday 8th June 2023 20:08 GMT Ian Mason
Confusing article
I can't see anywhere this article says that the defamatory speech was published anywhere, and the essence of libel is that something needs to be published.
Unless I'm missing something this is just another "Chat-GPT can produce rubbish" story combined with a "some idiot doesn't understand libel but can find a lawyer who will still happily take their money" story, neither particularly newsworthy of themselves, and the mere juxtaposition doesn't improve the newsworthiness.
-
Thursday 8th June 2023 21:08 GMT that one in the corner
Re: Confusing article
> I can't see anywhere this article says that the defamatory speech was published anywhere
Confusing, isn't it?
Since posting (above) I've been doing a bit more searching and I think I'm going to have to change my opinion about the liability of "Fred Riehl"[1] - he *is* a lazy "journalist", as he likes to use ChatGPT to generate his "stories" (although he does admit that in the byline, so that is one thing in his favour[2]).
*But* as far as I can find out, Riehl never actually published any article containing defamatory statements, as we (including myself) have been assuming in the comments here. Instead, it currently looks like all that has actually happened is that Riehl asked ChatGPT and got back a response mentioning a *random* "Mark Walters" and in doing so, it was ChatGPT that was publishing this information!
Now, it appears that Riehl and *a* Mark Walters[3] are buddies in the "pro gun rights"[4] movement - Walters writes[5] for the website and his radio show has been promoted on the website - so *obviously* when ChatGPT talks to Riehl it must be referring to *that* Walters, hence ChatGPT has published defamatory statements about his colleague. All the contacting of Gottlieb was just to check the facts, in case his Mark really was a wrong'un - or, more likely, in order to be able to say to the world "look, even Riehl can manage to do this much due diligence, so ChatGPT must be really negligent".
So, ChatGPT is, apparently, "publishing" when it spits out a response and, despite having to retract my previous idea[6] about Riehl (for a new and even worse one, but hey), I still stand by my belief that this is all an attempt to get publicity for two otherwise totally pointless individuals and their ridiculous website and radio show.
[1] His own website and linkedin profile name him "Fredy Riehl" (having both "d"'s may have been too bourgeois for him)
[2] About the only thing so far; maybe he likes fluffy kittens as well?
[3] Hereinafter referred to as "the idiot doing the suing" M'lud
[4] Nope, already made that joke
[5] https://www.ammoland.com/author/markwalters/
[6] Science - we change our ideas as the evidence leads us
-
Friday 9th June 2023 09:36 GMT TheMaskedMan
Re: Confusing article
"I can't see anywhere this article says that the defamatory speech was published anywhere, and the essence of libel is that something needs to be published."
In English law published would include telling someone in person - the publication would have happened when chatGPT produced the text for the journalist. I assume it's similar for left pondians, but I could be wrong.
-
Friday 9th June 2023 11:49 GMT Falmari
Re: Confusing article
@TheMaskedMan "the publication would have happened when chatGPT produced the text for the journalist."
Not sure that would be publication under Georgia law; they make a distinction between libel and published libel, and can only award damages if the libel is published.
"Georgia Code § 51-5-1 states:
(a) Libel is a false and malicious defamation of another, expressed in print, writing, pictures, or signs, tending to injure the reputation of the person and exposing him to public hatred, contempt, or ridicule.
(b) Publication is necessary to recover damages for libel in Georgia."
Seems to me that telling someone what you have written is not publishing; if it were, then every libel case would also be published libel. After all, how can someone bring a case for libel if they don't know libel has been written? How would they know if the writer does not share it?
Also, "it is the responsibility of slander and libel plaintiffs to prove that the statements under review are about them." Now that's going to be difficult: the only things he shares with the person in the ChatGPT output are that he is a resident of Georgia and his name - which he shares with every other resident of Georgia named Mark Walters.
-
-
-
Thursday 8th June 2023 20:37 GMT Paul Kinsler
ChatGPT is known to "occasionally generate incorrect information"
IIUC, it is rather that it only generates pseudo-information; i.e. text or other content which might *seem* authoritative, but whose various constituent parts, if taken individually, might be true ... or not ... all according to some poorly characterized probabilities.
This sort of thing might be fine as a rough starting point, but it really does need to be checked and corrected in some way before it might be considered trustworthy.
-
Thursday 8th June 2023 21:24 GMT that one in the corner
ChatGPT hallucinates 100% of the time
Talking about LLMs, including ChatGPT:
> It’s not that they sometimes “hallucinate”. They hallucinate 100% of the time, it’s just that the training results in the hallucinations having a high chance of being accidentally correct.
A nice way of phrasing the problem, which I'm shamelessly nicking (sorry, I mean "am excerpting a portion of under Fair Use"[1]) from a discussion of this case over at https://reason.com/volokh/2023/06/06/first-ai-libel-lawsuit-filed/?comments=true
[1] Which is a US concept, but as it is a US website I'm ripping off - sorry, there I go again - and The Register now self-identifies as USazian, I'm probably alright
-
Friday 9th June 2023 06:43 GMT steelpillow
Is blame binary?
Was it OpenAI for unleashing a lying toerag of a robot, or the journo for publishing without checking? The nub of that is, did OpenAI make the warnings of bullshit prominent enough?
Maybe there's a grey area where both parties share some of the blame: the journo should have read and heeded the warnings, while OpenAI should have made them more prominent.
-
Friday 9th June 2023 08:52 GMT Killfalcon
Re: Is blame binary?
I think the issue is in part that the journalist did check.
Imagine, hypothetically, that a journo rings your boss and asks if you've stolen company property - the answer is (presumably) no, but still, what's your boss going to think about it? What if your boss already didn't like you, and thinks that maybe you did?
You can see how the harm can spread.
I am curious, though - I thought "summarise this document" was one of the things these models were good at?
-
Friday 9th June 2023 19:04 GMT doublelayer
Re: Is blame binary?
"I thought "summarise this document" was one of the things these models were good at?"
It's more one of the things that they've been shown doing during demonstrations. They can do it, and they produce results which look good if you don't look too hard or if you get lucky, but they're as prone to problems as anything else. Also, there's a chance that the journalist gave the bot an address to the file, which won't work; the bot will simply make up something based on the rest of the prompt. It can only try to summarize a document if it is pasted in.
For a demonstration of this, here's a blog entry testing Bard, which works similarly, on describing images, which it won't retrieve and, as far as I know, can't do. It still tries making up a possible description for each picture it didn't read, and even if we grant it an imagined picture, its descriptions aren't internally consistent either. The descriptions are quite inventive, though.
-
-
Friday 9th June 2023 08:37 GMT xyz123
If ChatGPT is found "guilty", that means it has the rights of a human being, as a text generator that operates based on a user's inputted prompts cannot be held liable for the correctness of its output. Therefore using ChatGPT whilst paying it under minimum wage would ALSO be a crime. So Mr Walters could be on the hook for federal offences.
This should open a big can of worms. Worms everywhere!
-
Friday 9th June 2023 08:43 GMT sgp
A good outcome
Would be an obligation for bots to start every paragraph with the words "this may or may not be true, I have no way of knowing". That way it's easily spotted in the "journalism" of the lazy reporter class. It would also clearly lay the blame with the reporter who decides to remove that phrase. And it would make it clear to the believers that the technology is a bullshit generator with a good accuracy rate.
-
Friday 9th June 2023 10:20 GMT albaleo
I would probably trust GlasgowGPT over ChatGPT in terms of honesty.
Example:
Question: Can AI be trusted to provide accurate information?
Answer: Ha, trustin' AI is like trustin' yer pal wi a gammy leg to lead ye oot the pub - ye might get there, but yer gonna have tae drag him along the pavement on the way. But seriously, AI is only as good as the data that's fed intae it, and we aw know how much bullshit is on the internet. So ye cannae always rely on it tae provide ye accurate info, but it can be helpful at times. Ye just gotta take it wi a pinch of salt and use yer ain brain too, know whit a mean?
-
Friday 9th June 2023 10:42 GMT Anonymous Coward
One word after the other
It's not really AI though, is it? As I understand it, these LLMs work by calculating the probability of the next word in a sentence. The obvious problem is that it's very easy for them to go off at a crazy tangent. There are those who argue that this is how human brains actually work, and it might be, but only up to a point. A real human intelligence (i.e. not Donald Trump) might pick the wrong word, but most of us pay attention to what we're saying and would probably quickly realise we'd used the wrong word. The problem with these LLMs is that once one word is bollox, every subsequent word will automatically be bollox as well. Piling bollox on top of bollox until what you have is a huge pile of bollox.
So it seems that what OpenAI has actually invented is an electronic Donald Trump.
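A toy sketch of that compounding effect (a hypothetical hand-built word chain, not a real model): each word is chosen given only what came before it, so a single bad pick drags everything after it down the wrong branch.

    import random

    # Hypothetical chain: each word lists the words that may follow it.
    CHAIN = {
        "the": ["radio", "treasurer"],
        "radio": ["host"],
        "host": ["broadcasts"],
        "treasurer": ["embezzled"],  # the wrong branch: there is no way back
        "embezzled": ["funds"],
    }

    def generate(word, length=4):
        out = [word]
        for _ in range(length):
            options = CHAIN.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # conditions only on what was already said
        return " ".join(out)

    # Prints "the radio host broadcasts" or "the treasurer embezzled funds";
    # once "treasurer" is picked, every following word compounds the error.
    print(generate("the"))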
-
Friday 9th June 2023 11:55 GMT rmv
Dramatis Personae
The Second Amendment Foundation (SAF): Gun rights group who brought a lawsuit against the State of Washington (https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf)
The Citizen's Committee for the Right to Keep and Bear Arms (CCRKBA), also a plaintiff in the above lawsuit.
Both organisations were founded by Alan Gottlieb; SAF is a 501(c)(3) organisation (contributions are tax-deductible, but no political lobbying allowed) and CCRKBA is the sister 501(c)(4) organisation (contributions not tax-deductible but no restrictions on political lobbying).
Alan Gottlieb, who confirmed the facts to Fredy Riehl: chairman of the CCRKBA, vice-president of SAF, and founder of both organisations.
Mark Walters (the plaintiff), a director of the CCRKBA (https://www.ccrkba.org/?page_id=5210).
Fredy Riehl, (the journalist), a friend of Mark Walters and on the board of trustees of the SAF (https://www.saf.org/board-of-trustees/).
The complaint (https://aboutblaw.com/8ts).
In the complaint, Mark says "The plaintiffs in the Lawsuit are the Second Amendment Foundation and others, including Alan Gottlieb.", quietly forgetting to mention that CCRKBA is also one of the plaintiffs.
He also says that "Walters is neither a plaintiff nor a defendant in the Lawsuit.", neglecting to mention that he is a director of CCRKBA.
He very carefully says: "In the interaction with ChatGPT, Riehl provided a (correct) URL of a link to the complaint on the Second Amendment Foundation’s web site, https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf."
I'm very suspicious that he doesn't say Riehl provided that exact URL, as it's quite easy to get ChatGPT to make up an article based on information in the query string (https://simonwillison.net/2023/Mar/10/chatgpt-internet-access/).
I suspect this is a couple of chancers trying to get publicity for their organisations and the case is going to be dropped as soon as OpenAI subpoenas or submits Riehl's ChatGPT history.
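As a quick illustration of the query-string point: since ChatGPT (as deployed at the time) couldn't fetch URLs, the only "content" it receives is the text of the URL itself, which it can then embroider into a plausible-sounding article. A tiny sketch:

    import re

    url = "https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf"
    tokens = [t for t in re.split(r"[^A-Za-z0-9]+", url) if t]
    print(tokens)
    # ['https', 'www', 'saf', 'org', 'wp', 'content', 'uploads', '2023',
    #  '05', 'Dkt', '1', 'Complaint', 'pdf']
    # Everything else in a "summary" has to be made up around these fragments.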
-
Friday 9th June 2023 14:00 GMT localzuk
Should be dismissed
OpenAI and ChatGPT make it clear that it doesn't always provide truth. It is right there in its guidance. It is a prose creator, not a search engine. It doesn't even have access to the internet, as is explained in the guidance for its use. So, someone feeding it a URL would be pointless as well.
If the user doesn't pay attention to the instructions on how to use the system, that's on the user. So the journalist seems to be the problematic part of this, not ChatGPT.
ChatGPT is being used for things it simply isn't designed for.
-
Friday 9th June 2023 14:40 GMT David Nash
ChatGPT is known to "occasionally generate incorrect information"
Shouldn't that be "Regularly"?
But this is the fault of the user, as many have pointed out. Like that previous case with the lawyer for a guy suing an airline, who got fictional prior cases out of ChatGPT because he thought it was a kind of "super search engine".
Hopefully the fact that this is not the case is becoming more well-known.