Anthropic’s law firm throws Claude under the bus over citation errors in court filing

An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz's own AI tool, Claude, to format references. The incident reinforces what's becoming a pattern in legal tech: while AI models can be fine …

  1. Anonymous Coward
    Anonymous Coward

    If you use an AI to prepare any kind of document for submission in court, make sure to pack a toothbrush.

  2. chivo243 Silver badge
    Facepalm

    The Jetsons

    Uniblab! Cue Hanna-Barbera background music!

    "Our manual citation check did not catch that error." Manual check? hahaha ha. Ha.

    Side note, is there another icon missing from the gallery?

  3. Tron Silver badge

    Going up against a 'Special Master' with just AI is insane.

    Lightsabre your biggest need you will.

  4. Don Bannister

    Being lazy ...

    It said they had implemented procedures "to ensure that this does not occur again."

    How about forgetting AI and just checking citations properly?

    1. An_Old_Dog Silver badge

      Re: Being lazy ...

      Isn't this sort of "formatting" normally done by legal-office secretaries, who make extensive use of pre-formatted documents and templates?

      1. Doctor Syntax Silver badge

        Re: Being lazy ...

        It used to be but they're expensive.

        Having said that, a couple of us used to draft witness statements on the computer of the microspectrophotometer, print them out on the laser printer and then send them to the typists to be typed up using plain old mechanical typewriters on the correct stationery in the correct format or, as my colleague said, to get typing errors added.

  5. Neil Barnes Silver badge
    Stop

    Latham & Watkins has implemented procedures

    The best - safest - procedure is probably to stop using automated guesswork machines as part of your toolset when you are being paid for _your_ expertise and opinions.

    Or indeed, at any other time.

    1. Like a badger Silver badge

      Re: Latham & Watkins has implemented procedures

      That may be so, but there are two simple drivers for the law firm that, to an extent, make the risk worth taking.

      The first is rapacious profiteering. Large law firm structures involve a lot of equity partners and are thus "flat", which means the partners control everything and take home all the profits. They are (from personal experience some years ago at a large law firm) acutely aware of the cost of everything. By using AI, they can redeploy the office juniors on other billed work (and any cynic who knows anything about US law firms will conclude that the clients will still be billed as if the juniors had worked on the case that did in fact use AI).

      The second driver is that AI offers a considerable speed benefit. I've been using both Claude and full-fat Copilot lately for work, and as you undoubtedly know they're both impressively fast (my use is low risk, and checked) and effective when they do work. Now, the clients of top law firms place a truly inordinate value on speed of response; I know this because my role included overseeing client and market research. So using AI at a law firm means faster turnaround of briefs, court documents, and the sifting and searching of opinions. It does make you wonder how many court cases have already contained AI-generated errors which weren't picked up by the court or the opposing law firm.

      The legal sector does need AI, but it needs an implementation that is prepared to trade the risk of missing something in return for never hallucinating and never getting facts wrong. We're a long way from that goal, because of the way that LLMs are created and trained.

      1. Doctor Syntax Silver badge

        Re: Latham & Watkins has implemented procedures

        "take home all the profits"

        A few costly and well-publicised failures and the profits will disappear.

        1. Neil Barnes Silver badge
          Holmes

          Re: Latham & Watkins has implemented procedures

          Presumably the cost of Claude and its friends will appear as line items on the bill, so that the clients will know what they're paying for?

          1. Like a badger Silver badge

            Re: Latham & Watkins has implemented procedures

            "Presumably the cost of Claude and its friends will appear as line items on the bill, so that the clients will know what they're paying for?"

            Done properly by international law firm custom, Claude will be charged as a disbursement; then the client gets charged the time of the junior staff who operated Claude, and another line item for the law firm's own view of the "equivalent staff value" of the work Claude did. Obviously they'll then build up the costs with lines for mid-grade staff allegedly overseeing the juniors, then the partners leading the case. Plus any opportunity to cross-refer to other partners in the firm: "you asked about finance restructuring, but I needed to consult the partners who specialise in tax and intellectual property, so you'll pay for their time too". I was technically "non-fee-earning" and worked on back-office business intel, but on occasion a partner would ask me for an analysis of what a big client did and where we might expand our offering to them, and the said client got billed for my time on how we might ream them better.

            And then they add 20% to all lines as a matter of principle.

        2. Like a badger Silver badge

          Re: Latham & Watkins has implemented procedures

          There have been a few major law firm collapses, but in general they're remarkably difficult to kill (and unlike vampires, garlic and sunlight don't work). The law firm business model is that the clients normally take all of the risk and the law firm still gets paid, whether or not the client's case is won or the client benefits from expensive legal advice*, so for the few occasions when the law firm might be held to account it doesn't create any existential risk. Of the small number of major law firms that have collapsed, it seems to me that the cause is not directly related to poor-quality advice or misbehaviour, but down to buggering up their own finances: borrowing more than they should to (in effect) support a partner pay-out, and then being unable to repay. US law firm Dewey & LeBoeuf went down for this reason back in 2012, perhaps the most high-profile law firm collapse in the past half century. Because law firms are owned and managed by the partners, they're in quick trouble if their star partners leave: clients tend to follow these top names, and if a law firm starts looking like it's in trouble then it's the best partners who bail first.

          * Most top law firms earn only a fraction of their total fee income from litigation (contested cases in court); there's a whole lot more money in "advice" on debt transactions, whether working for corporate borrowers or the banking lenders, on M&A, intellectual property and technology, and on competition law for regulated businesses or regulators. Advice is usually not tested in court, and the client has to rely on the expensive legal advisor, meaning that the law firm gets paid whether the advice is good or not. In practice, corporate advice work is often simply a money-go-round for investment banks, lenders, law firms, consultants and other specialist advisors; the corporate clients would be served just as well and more cheaply by getting a third-tier** law firm in for their needs.

          ** In law, everything is about the professional reputation of the individual partners involved, and there's a range of trade bibles that cover this. In my view, paying top rate for a big-hitting attack-dog litigator (sometimes for legal defence) can be justified; likewise the top barristers on specialist matters can often justify their fees, and sometimes competition lawyers. But corporate advice on a lot of the other stuff is often money for old rope***.

          *** There is no three asterisk comment.

          1. Anonymous Coward
            Anonymous Coward

            Re: Latham & Watkins has implemented procedures

            > clients take all of the risk and the law firm still gets paid

            I have personal experience of one civil case where the winning side declined seeking costs because the losing side had been so misled and screwed by their solicitors into pursuing their claim that it just didn't feel right to hurt them further.

            Suckered clients aren't usually so lucky.

        3. Anonymous Coward
          Anonymous Coward

          Re: Latham & Watkins has implemented procedures

          I don't think so.

          The risk is asymmetric: bonus payments for profit, never bonus reclaims for badly screwing up.

          At most such people are invited to find another place to absorb money.

      2. ChrisElvidge Silver badge

        Re: Latham & Watkins has implemented procedures

        Can we imagine an LLM trained on Star Trek scripts?

        Or, as this is a legal column, LA Law scripts.

        Note: Captain Kirk was in both.

        1. that one in the corner Silver badge

          Re: Latham & Watkins has implemented procedures

          "It's a citation, Jim, but not as we know it."

          1. Anonymous Coward
            Anonymous Coward

            Re: Latham & Watkins has implemented procedures

            And so, just before the end of the week, we discover another Comment Of The Week.

            Congratulations :)

  6. OldSod

    Replace "AI" with "computer system" and see how bad they look

    The poor quality of the firm's citations was caused by the computer system they used to generate the citations. Throwing "AI" into the explanation just clouds the issue. Their software sucks, yet they used it to generate citations for a legal filing to the court. That sounds irresponsible to me.

    1. Brewster's Angle Grinder Silver badge

      Re: Replace "AI" with "computer system" and see how bad they look

      And worse, it sounds like the sort of thing a conventional algorithm could do reliably; just a database query and formatting the output.

      1. that one in the corner Silver badge

        Re: Replace "AI" with "computer system" and see how bad they look

        No need for the database*: they already had the citation & all its details, it just needed formatting.

        * Maybe just a little one, say a simple list of BibTeX references, look the paper up in that to get the citation in a standardised format, which can be fed into some kind of formatter, like, oh, I dunno, a (La)TeX macro that spits out whatever is needed to merge it into the rest of the filing.
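        A minimal sketch of that non-LLM pipeline, in Python rather than BibTeX/LaTeX: a small lookup table of references plus a deterministic formatter. The reference key, fields, and citation style below are all invented for illustration, not any court's required format.

```python
# Hypothetical sketch: a tiny reference store plus a deterministic
# formatter, standing in for the "little database + formatter" idea above.
# Keys, fields, and output style are illustrative only.

REFERENCES = {
    "example2020": {
        "authors": ["A. N. Author"],
        "title": "A Worked Example",
        "journal": "The American Statistician",
        "volume": 74,
        "pages": "10-19",
        "year": 2020,
    },
}

def format_citation(key: str) -> str:
    """Look up a reference by key and emit a fixed-format citation string."""
    r = REFERENCES[key]
    authors = ", ".join(r["authors"])
    return (f'{authors}, "{r["title"]}", {r["journal"]} '
            f'{r["volume"]}, {r["pages"]} ({r["year"]}).')

print(format_citation("example2020"))
```

        Because the formatter never invents fields, title and authors come out exactly as stored; the only way to get a wrong citation is to put wrong data in the table.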

        1. Brewster's Angle Grinder Silver badge

          Re: Replace "AI" with "computer system" and see how bad they look

          "Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.

          (My bold.) So it seems the AI was intended to add the authors. I naively assumed the "link to the provided source" was a link to a legal database of judgements that could be queried for the remaining information. But if not, I'm sure it could have been regex'd out of the source.

          The point is this is a perfectly automatable job. Any of us having to do more than a dozen links would write a script. These lawyers, however, don't have those skills and would never think to commission devs to do it. (And if they did, what would come back would be unwieldy and unusable.) AI does have the potential to bridge this skills gap and bring automation to the masses. It's just that, in this case, it would have been better to get it to write the script, spend some time informally verifying it (and getting the AI to correct it), and then use that script.
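          If the source lines really were regular, the "regex it out" approach is only a few lines. A hedged sketch, with an entirely invented source-line format (the real filing's sources may look nothing like this):

```python
import re

# Hypothetical sketch of the "write a script" approach: pull the fields
# out of a known, regular source string with a regex, rather than asking
# an LLM to reproduce them. The source-line format here is invented.

CITE_RE = re.compile(r'^(?P<title>.+?) \((?P<year>\d{4})\), (?P<url>https?://\S+)$')

def parse_source(line: str) -> dict:
    """Extract title, year, and URL from one source line; fail loudly otherwise."""
    m = CITE_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognised source line: {line!r}")
    return m.groupdict()

fields = parse_source("Binomial Confidence Intervals (2018), https://example.org/paper")
print(fields["title"], fields["year"])
```

          The key difference from an LLM doing the same job: an unrecognised line raises an error instead of being silently "completed" with plausible-looking authors.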

          1. John Brown (no body) Silver badge

            Re: Replace "AI" with "computer system" and see how bad they look

            "These lawyers, however, don't have those skills and would never think to commission devs to do it."

            I wonder what qualifications and experience Ms Chen the Data Scientist has?

  7. PinchOfSalt

    Error correction

    At a previous employer we were using forms of AI and other methods to validate that marketing content complied with client brand guidelines.

    It was actually reasonably good at this and better than humans.

    The reason for this is that it didn't get bored, and it was just as adept at spotting anti-patterns as patterns, whereas humans are only really good at pattern matching and very quickly tire of trying to identify anti-patterns, as it's very intensive brain work.

    So, having AI draft something and then asking a human to proof read it is not exactly ideal.

    There is of course no corresponding data to show how often citations are incorrectly made in court through pure human error, as that's not very newsworthy, so before we throw this under the bus, it's probably worth understanding that.

  8. tiggity Silver badge

    Given the eye-wateringly high charges that usually come from legal companies, surely using "AI" (rather than a properly trained and qualified human) should be illegal / fraud?

    The customer is paying for human legal expertise, not half-arsed "AI" hallucinations.

    1. Anonymous Coward
      Anonymous Coward

      "The customer is paying for human legal expertise"

      That's what they THINK they are paying for, but we still don't know whether AI makes more or fewer errors than meatsacks. We hear about AI errors because they annoy judges and are therefore newsworthy, but if a case gets tossed because a meatsack fouled up the paperwork then that's not news, and the offending law firm usually only has to redo and refile the papers.

  9. Anonymous Coward
    Anonymous Coward

    Trust

    If the suppliers don't trust their own tools, then you can't trust them for critical work either.

    If 'autopilot' actually means I have to be in charge to monitor, it's not autopilot.

    If the AI supplier still states 'this will make mistakes, you need to check everything', then it's not suitable for work where accuracy is critical.

    It's the inconsistency that is the catch with AI: it can work fine for a while, then randomly have a psychotic episode. And it doesn't hallucinate in obvious ways; it's usually very subtle. I would bet that in this court case the hallucinations in the submissions looked completely believable until you checked them.

    And that's the catch: the business thinks that AI is going to bring superhuman improvements in efficiency, but that can't happen if you need to check everything the machine writes with a fine-toothed comb. And this story is what will happen: the board won't accept the time needed to verify AI output and will push the employees harder, and they will stop checking the AI's work in order to deliver at the efficiency demanded by the board.

    At some point the poop will start hitting the fan with increased regularity, and larger poops.

    1. John Brown (no body) Silver badge

      Re: Trust

      As I've said many times, AI is trained to never say "I don't know", which is probably the biggest single problem with it.

      1. silent_count

        Re: Trust

        @John Brown

        Of course you're right but nobody is going to pay billions for what amounts to

        10 print "I don't know"

        20 goto 10

        It seems that people will pay a lot for programs which say, "Yes, I'm sure that info is correct", even when they're wrong some of the time.

  10. that one in the corner Silver badge

    Anthropic data scientist Olivia Chen meets legal formatter

    According to the pdf linked to in TFA, Chen provided a reference to an article in The American Statistician, with all the usual volume, page number etc.

    Maybe I'm just wildly out of touch with the whacky hijinks of American statisticians and the academic qualities of a commercial "data scientist", but boring old UK physicists, cosmologists and the like, use standardised formats for all their citations - we still see the unmistakable (as in, neat, well presented) signs of (La)TeX in their handouts (even if some of them use a GUI frontend, WIMPs) with a quick'n'easy citations list at the bottom pulled from the BibTeX list.

    Wouldn't one expect an academically-inclined "data scientist" to have their citations similarly organised, so that any formatting needed for presentation in any document was a simple mechanical task, perfect for any distinctly non-LLM-powered PC to do?

    Wouldn't one expect any firm that deals with representing scientists of any stripe to have already encountered such academic citations and have the necessary mechanical formatting just one simple wordprocessor template away?

    Probably a complete pipe dream, hoping that the two sides would be as organised as their claimed professions would imply, let alone that they could manage a simple bit of textual impedance matching without the use of Claude.

  11. spireite

    and they say 'trust AI for writing code'

    The death of the developer is very much overestimated.

    It can't be trusted with simple tasks, frankly.
