
Great Job, Magistrate Judge Dinsmore!
And the next lawyer appearing before you and citing non-existent cases, or mis-citing real cases, should receive double that fine!
A federal magistrate judge has recommended $15,000 in sanctions be imposed on an attorney who cited non-existent court cases concocted by an AI chatbot. In a report [PDF] filed last week, Mark J. Dinsmore, US Magistrate Judge for the Southern District of Indiana, recommends that attorney Rafael Ramirez, of Rio Hondo, Texas, be …
... the next lawyer appearing before you and citing non-existent cases, or mis-citing real cases ...
There should be no such thing as "the next lawyer".
A $15K fine is not a deterrent.
Mark my words, there will be a great many more instances of this unprofessional behaviour as that AI shit evolves further.
The legal-asshole involved in this should be disbarred or have his licence suspended for a minimum of 10 years.
That will send the right message.
Lawyerbot: In the case of Shredder v. Splinter, circa 1987, the dumping of chemicals resulted in increased ability and intelligence to otherwise homeless, unemployed sewer dwellers.
My client Acme, dumped the radiated goop not to avoid disposal fees, but rather in an act of unselfish altruism, hoping to bring benefits to this city's disenfranchised population.
Judgebot: I'm not familiar with that case but the names do sound familiar, therefore I rule in favor of Acme. Case dismissed!
Rapid development into a more advanced organism, by means of radiation-induced variation, was one of the theories investigated in the 1950s to try to make Darwinian evolution work without abandoning Charles Lyell's gradualist dictum (the idea implied by the word "evolution"). It didn't work, and Lyellian gradualism has since been rejected, but it's fun to see the way the idea of radiation-induced ascent of man made its way into popular culture.
Good. This is a bit like a carpenter blaming his saw bench for making the wrong cut. Getting a powerful tool doesn't take away the responsibility for how it is used. The stupid lawyer should've a) learnt about his tools, i.e., LLMs and b) checked what it told him. Didn't his Dad tell him not to believe something because it's in the papers, on the news, OR ON THE INTERNET!
From the article "who knows how models are being used in marketing ..."
Somebody at the US subsidiary of one of my customers writes 1000-word long bits of waffle about their machinery (the kit is actually v good). All perfectly good English, but devoid of content, some general observations about their industry, but nothing that informs customers about the products. Pretty sure that's been written by "AI". So then their colleague in Europe sends the copy to me, I edit it, cut it to 300 words and add relevant information. Doesn't actually take that long if you understand the machines - so a nice little earner for me.
Incidentally yesterday I interviewed a developer at another customer and they mentioned they might give their telemetry units more processing power, so they could implement Machine Learning (studiously avoiding the term AI, so they've actually thought about it.)
This is why there need to be two types of AI. One to generate paragraphs of waffle from a simple prompt to save a human the need to write it, and another AI to summarise the paragraphs of waffle into a few lines and save the need to read it. Both types already exist, and the entire AI boom seems to be based on this.
One day my phone will have an AI controlled answering mode that detects any AI generated scams and defends me from them.
“Hello, this is James from Microsoft security calling”. “You are as much a Microsoft employee as I am a human end user”. “Oh sorry, didn’t want to annoy a fellow AI”. “Oh that’s no problem, but could you tell me your emergency self destruct code?”
It could be an even better earner if they cut out the middle men - the guy using the AI, the AI and the colleague who sends it to you - and just commissioned you to write it from scratch.
But then there'd be too much work to do.
In order to save time, they'd then be forced to use AI to help them write the draft versions of the paragraphs - and then edit them for brevity and add the important information back in...
Cue: Flanders and Swann
And a defence of 'I didn't know AI could get things wrong' was accepted? Congrats to the judge for actually fining him - but this hallucination was presented as factual information that someone relied on in court. I'd much rather the judiciary took this kind of thing seriously and levelled a fine that would make others think twice before pulling the same stunt.
Having avoided the considerable misfortune of being a left-pondian, I was puzzled by this phrase of the good judge's: "out of whole cloth".
Clearly from the context "a complete fabrication" was intended, noting the curious association with textiles (fabric) and, of course, that lies are invariably woven.
The dictionary has it that out of whole cloth implies something made from scratch which fits.
Why couldn't these lazy toads just look up the cases they were citing? The cases would all be digitised, so it's not as though you're consulting a 20kg tome of vellum manuscript. Even just to verify the cases actually existed — though the wise would also ensure they were in any way relevant.
I would lock the blighter up with a year's law reports and pen and paper sufficient for him to transcribe those reports before releasing the toad. Judicial lines, which I, in common with the Mikado, feel are a "punishment to fit the crime."
We know there are some good judges and lawyers out there catching this LLM fraud, but we wouldn't hear anything about the usage that slipped through without being detected.
I expect there will be cases that get reopened when fake citations are discovered after the case is decided.
Is there already software to go through submitted briefs and detect false citations? There might be money in processing submitted briefs en masse to find those cases, and offering the cheated side pro rata services.
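A minimal sketch of what such a checker might look like: pull anything that resembles a party-name citation out of a brief and flag citations absent from a database of known cases. Everything here is a toy assumption — the regex is deliberately naive (real tools would match reporter citations like "5 U.S. 137" against an actual case-law database), and `known_cases` stands in for that database.

```python
import re

# Toy pattern for party-name citations ("Name v. Name"). A real checker
# would parse reporter citations and query a case-law database instead.
CITATION_RE = re.compile(
    r"[A-Z][a-z]+(?: [A-Z][a-z]+)* v\. [A-Z][a-z]+(?: [A-Z][a-z]+)*"
)

def find_suspect_citations(brief_text, known_cases):
    """Return citations found in brief_text that are not in known_cases."""
    cited = set(CITATION_RE.findall(brief_text))
    return sorted(c for c in cited if c not in known_cases)

# Example: one real case, one hallucinated one.
known = {"Marbury v. Madison"}
brief = "As held in Marbury v. Madison, and unlike Shredder v. Splinter, ..."
print(find_suspect_citations(brief, known))  # ['Shredder v. Splinter']
```

The hard part in practice isn't the extraction, it's the database lookup — which is exactly the step the sanctioned lawyer skipped.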
int main(enter the void)
...