Texas judge demands lawyers declare AI-generated docs

After a New York attorney admitted last week to citing non-existent court cases that had been hallucinated by OpenAI's ChatGPT software, a Texas judge has directed attorneys in his court to certify either that they have not used artificial intelligence to prepare their legal documents – or that if they do, that the output has …

  1. katrinab Silver badge

    For drafting a contract, surely it is better to have something that asks questions about what you want in the contract, then pulls up the relevant paragraphs of lawyerspeak depending on the answers.

    For example, if it was a property lease, you would ask the jurisdiction, whether it was residential / commercial / agricultural, the length of the lease, the amounts due, the names of the parties, and maybe one or two other questions based on answers to the previous questions.

    That sort of thing, I'm sure lawyers have been doing since the 1980s.

    1. big_D Silver badge

      With tools they've had since the 80s... AI does the same, just with added hallucinations.

  2. Eclectic Man Silver badge

    AI Judges

    "citing non-existent court cases that had been hallucinated by OpenAI's ChatGPT software"

    There was an article in the Financial Times (29 September 2022), 'AI-driven justice may be better than none at all', which supported the idea that an AI system could act as a judge in some cases, thereby releasing human judges for more difficult cases. It was supported a few days later by a professor of Law and Criminal Justice at an Institute for People-Centred AI at a UK University. Somewhat worrying, therefore, that the currently most famous AI / LLM, ChatGPT, makes stuff up for effect. How come the rules of ChatGPT do not insist that references to documents or events are genuine, and listed in a database?

    I wonder how many lawyers or judges would trust an AI / LLM to adjudicate if they were the accused.

    1. Chris Miller

      Re: AI Judges

      That would depend on whether they were guilty or not.

    2. Raphael

      Re: AI Judges

      Don't know. I was listening to a lawyer who is also a YouTuber (Good Lawgic), and he asked ChatGPT to provide the legal precedents for a case he had coming up. ChatGPT cited 5 cases, and 3 didn't actually exist.

    3. Paul Kinsler

      ChatGPT makes stuff up for effect

      Indeed, it does nothing but make stuff up. As I understand it, if some fact was often present (or heavily weighted) in its training set, then GPT might indeed be likely to get it right (e.g. what is the capital of France?) ... but unfortunately there are lots of obscure and rarely repeated facts, which will not leave such a marked footprint, so GPT instead ends up just generating something with a plausible linguistic structure.
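      The point above can be sketched with a toy bigram model. This is not how GPT actually works internally (that involves learned transformer weights, not raw counts), but it illustrates the same failure mode: a fact repeated often in the training text is reproduced reliably, while anything the model hasn't seen simply produces nothing (or, in a real LM, a plausible-looking guess). The tiny corpus here is invented for illustration.

      ```python
      from collections import Counter, defaultdict

      # Toy "training set": one fact repeated three times, one mentioned once.
      corpus = ("the capital of france is paris . "
                "the capital of france is paris . "
                "the capital of france is paris . "
                "smith v jones held that the claim fails .").split()

      # Count which word follows which -- a crude stand-in for training.
      following = defaultdict(Counter)
      for a, b in zip(corpus, corpus[1:]):
          following[a][b] += 1

      def next_word(word):
          """Return the most frequent continuation seen in training, if any."""
          counts = following.get(word)
          return counts.most_common(1)[0][0] if counts else None

      print(next_word("france"))  # the oft-repeated fact comes out right: "is"
      print(next_word("is"))      # "paris"
      print(next_word("zebra"))   # never seen in training: None
      ```

      A real LM never returns None for the unseen case; it samples whatever scores as most plausible, which is exactly where hallucinated citations come from.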

      At a slight tangent, here's an interesting read on AI that I just found down a rabbit hole:

      1. Paul Kinsler

        Re: ChatGPT makes stuff up for effect

        And now I think about it, for things such as citations (whether of the legal Hanbury vs Brown-Twiss type, or the academic W. Blackstone, Journal of Things 22 (1955) type), their content is rarely that explicit unless you check, so it is not clear to me how an ordinary ML training process can reasonably be expected to "understand" any more about what citations are than the way they appear in the text. To fix this, I guess, you have to train the AI to also follow and ingest the cited material, and not just the citation-as-bare-text itself.
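        Short of retraining anything, the cheaper fix is a post-hoc check: treat the model's output as untrusted and look every citation up in a real index before filing. A minimal sketch, assuming a trusted database of known cases (here faked as a small set; the citation format and the regex are simplified and invented for illustration, as is the fabricated second case):

        ```python
        import re

        # Stand-in for a real reporter database or docket search.
        known_cases = {
            "Donoghue v Stevenson [1932] AC 562",
            "Carlill v Carbolic Smoke Ball Co [1893] 1 QB 256",
        }

        def unverified_citations(brief, database):
            """Return citations in the brief that the database cannot confirm."""
            cited = re.findall(r"[A-Z][\w']+ v [A-Z][\w' ]+\[\d{4}\][^,.;]*", brief)
            return [c.strip() for c in cited if c.strip() not in database]

        brief = ("As held in Donoghue v Stevenson [1932] AC 562, and again in "
                 "Varghese v China Southern Airlines [2019] XY 123, ...")
        print(unverified_citations(brief, known_cases))
        # flags only the citation the database cannot confirm
        ```

        Which, notably, is a check the New York attorney's word processor could have run in milliseconds.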

        1. Kimo

          Re: ChatGPT makes stuff up for effect

          They seem to be very good at creating citations that match the format you ask for, but in academic papers ChatGPT will list a real researcher and journal title in the relevant discipline and make up the other parts. Given the way legal and academic journals are usually paywalled, it would be difficult for a program to access and parse actual articles.

    4. Jedit Silver badge

      "'AI-driven justice may be better than none at all'"

      Has someone been watching Lexx again?

    5. Michael Wojcik Silver badge

      Re: AI Judges

      The role of judge differs considerably across jurisdictions, so as a generalization "AI as a judge" doesn't mean much. Of course, "AI" is also a largely meaningless term; unidirectional transformer models would be terrible judges for all but the simplest and most straightforward of cases, but there are many other possible architectures. Not that I'd want any of them presiding over my case, at least at the present state of the art, and even though many human judges are awful.

  3. Anonymous Coward

    After a New York attorney admitted last week to citing non-existent court cases ....

    Will ChatGPT make lawyers obsolete? (Hint: be afraid) Reuters, Dec 9, 2022 -- So it's already come to pass, at least for that New York attorney. We have to wonder, will ChatGPT continue to pick them off one at a time, until nobody is left ... but ChatGPT, J.D.

  4. Doctor Evil

    Pot, meet Kettle?

    "[...] Unbound by any sense of duty, honor, or justice, such programs act according to [...]"

    Pretty much how every lawyer I've ever had to deal with has acted too ...

    1. Brian 3

      Re: Pot, meet Kettle?

      and a lot of judges too - maybe something to do with also being lawyers...

  5. veti Silver badge

    Surely, lawyers are always responsible for the content of briefs they submit to a court. If a human researcher helps them draft it, or they run it through a spellchecker, the lawyer (licensed, insured) is still responsible.

    Why does AI need special rules?

    1. Phones Sheridan Silver badge

      It doesn’t need special rules, it just needs existing rules to be applied.

      As stated in the article.

    2. Richard 12 Silver badge

      Because far too many people think "the computer is always right".

      So they don't check.

      This is also the most immediate danger with "AI". A great many lawyers, politicians, judges, managers and other people with power are delegating that power to the likes of ChatGPT, assuming that it's "correct".

      The consequences of the amplified bias and hallucinations can be horrific.

      Lesswrong might be really concerned about "alignment", but it's not the pressing problem.

      1. Claptrap314 Silver badge

        And drivers.....

      2. Michael Wojcik Silver badge

        The consequences of the amplified bias and hallucinations can be horrific.

        Lesswrong might be really concerned about "alignment", but it's not the pressing problem.

        Er... that is an alignment problem.

  6. Mayday

    What we have here

    Is a glorified predictive text generator. That is all.

    The first time we saw the new logo we had a lot more of it and we had a little more than that and it is now the same.

    The preceding sentence was brought to you by my iPad’s text predictor. Doesn’t look too much different from what this lawyer used tbh.
