You got legal trouble? Better call SauLM-7B

Machine-learning researchers and legal experts have released SauLM-7B, which they claim is the first text-generating open source large language model specifically focused on legal work and applications. In light of recent high-profile blunders in which generative AI cited non-existent cases in submitted court filings – Mata v …

  1. This post has been deleted by its author

  2. shazapont
    Holmes

    Optimism backed by belief

    Great… Saul… ;-D

    Does that sound like a positive step in the right direction? Or does it sound like a religious mantra: pleading, hoping, and guessing, ultimately ending with untestable systems that may or may not provide a relevant, correct response, or that repeat and mash up mistakes and other phenomena with cute names?

    I’m really supportive of all this work, but a dose of reality could help.

    Or am I a curmudgeonly backward skeptic who has seen similar things come and go, then return, muted but more practical?

    1. Yet Another Anonymous coward Silver badge

      Re: Optimism backed by belief

      Rather like searching for cases on LexisNexis vs Bing

    2. VicMortimer Silver badge
      Holmes

      Re: Optimism backed by belief

      I'm pretty sure that was less a religious pick, and more an indication that they knew they were building a criminal lawyer.

  3. Aleph0
    Trollface

    What lawyers enjoy

    "tools to help lawyers focus on what they enjoy most and do best, which is to exercise legal judgment and help their clients with advice."

    That's odd, I've always had the distinct impression that the thing lawyers liked the most about their job was billing their clients...

    1. HuBo
      Meh

      Re: What lawyers enjoy

      That's certainly the case States-side, but the equilibrium between "romanticism" and "materialism" may be slightly different in France, IMHO (where the tool comes from), making that statement somewhat less silly there -- only somewhat though, as illustrated by Audrey Fleurot's role in Engrenages (an actress with outstanding range, as demonstrated by her most multidimensional roles in Un village français and Les Reines du ring, alongside Corinne Masiero, among others).

      "Democratizing" access to the best legal advice, via AI (SauLM-7B, atop Mistral-7B), does sound good on the face of it, in this age of vastly overworked Public Defenders. But I do worry about the potential for eventually developing (further) a two-track justice system, where the wealthy get a top-notch, hand-crafted, luxury defense, and the plebs get a mass-produced, automated, AI response instead. The prospects seem as nuanced as Molly Crockett and Lisa Messeri suggest they might be, for scientific epistemology, in a recent interview.

      As a total non-expert in this though, I think it could be valuable to get the perspectives of some Civil Rights leaders on this (e.g. the Rev. Al Sharpton) -- even for the French and Portuguese developers of the tool.

  4. Anonymous Coward
    Anonymous Coward

    I love when tech nods to cult shows like this

    Better Call Saul is one of the best shows I ever watched yet I can barely remember what happened.

    1. Derezed

      Re: I love when tech nods to cult shows like this

      Spoiler alert: he went to jail

  5. An_Old_Dog Silver badge

    Untestable-Quality LLM-Based Legal Help

    One can algorithmically use a computer to test the output of an LLM to see whether or not it "hallucinated" case law.
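    That first, checkable failure mode amounts to a plain existence test against an index of real citations. A minimal sketch, where the toy known-case set and the naive citation regex are both hypothetical stand-ins for a lookup against a real case-law database:

    ```python
    import re

    # Hypothetical stand-in for a real case-law index.
    KNOWN_CASES = {
        "Brown v. Board",
        "Mata v. Avianca",
    }

    # Naive "Party v. Party" pattern, purely illustrative.
    CITATION_RE = re.compile(r"[A-Z][a-z]+ v\. [A-Z][a-z]+")

    def find_suspect_citations(llm_output: str) -> list[str]:
        """Return cited cases that match nothing in the known-case index."""
        return [c for c in CITATION_RE.findall(llm_output) if c not in KNOWN_CASES]
    ```

    The second failure mode is exactly what this check misses: a citation can pass the existence test while misstating what the case actually held.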

    One cannot algorithmically use a computer to test the output of an LLM to see whether it failed to output case law that is relevant and required to successfully defend a client, or to successfully prevail for one's plaintiff-client in a civil case.

    If one needs a trained-in-the-law human to closely review and creatively think about an LLM's output ("What, if anything, is missing here?") to ensure high-quality results, then where's the advantage of using an LLM for this work? It'd be like working in a factory where one has a somewhat-dumb, somewhat-flakey "assistant" whom one has to constantly monitor to ensure they don't bypass the safety mechanisms, stick their hands into a dangerous machine, or do something which results in injury to the other workers: one has no time to do any of their own work, because they spend all their time monitoring their "assistant."

    All else being equal, paying two people to do the same job formerly performed by one person is not a path to economic success.

    Similarly, the output rate of cat.exe legal_cases.txt | LLM.EXE | supervisory_human is limited to the speed of the slowest performer: the human.

    The only economically-useful way I can see LLMs used as "solicitors" is if the company using them does not properly monitor the LLMs' output and accepts that such legal advice will be lower-quality advice, vs the quality produced by an all-human legal team.

    1. doublelayer Silver badge

      Re: Untestable-Quality LLM-Based Legal Help

      And also relevant: hallucinations that take the form of a completely made-up case can be detected, but hallucinations that claim a real case decided something it didn't, something irrelevant to the matter at hand, or something decided under conditions or in a jurisdiction that make it inapplicable cannot be algorithmically detected. It's not just missing something that could have helped; the potential to give out incorrect information is still very much there.

  6. Pascal Monett Silver badge

    "systems specialized for the legal domain will perform better than generalist ones"

    Yes. They used to be called expert systems, and there, just like today, there was no AI to be found.

    But they did work.

    We'll see how this one lives up to the legacy.

  7. Doctor Syntax Silver badge

    Given different legal systems in different countries (let alone language differences*) this would require a model trained on different data in different jurisdictions.

    * And how would it cope with bilingual countries?

    1. I am David Jones Silver badge

      Bilingual countries can actually be a treasure trove of source data for machine translation, e.g. the parliamentary proceedings of Canada, which are all human-translated. No doubt all laws too, and maybe case law. So there's no reason why one cannot have a model for each language.

      In any case, I'd just assume that a model will only be trained in one language, with input and output machine-translated accordingly.

  8. Anonymous Coward
    Anonymous Coward

    Wait till someone uses it to generate endless procedure and delay tactics :-(

    Maybe it's already possible with that new legal chatbot. Maybe it's not, but it would be rather easy to train on thousands and thousands of successful examples of how to get someone freed on procedural errors, or to find tactic after tactic to slow and delay justice until the case gets dropped.

    Such tools offer "ideal" options to swamp the complex judicial system and cripple it into "factual non-existence", in no small part due to the many ambiguities left in law by poor legislation.

  9. Anonymous Coward
    Anonymous Coward

    So Saul ...

    detail to me a defence against fraud charges.

    Probably best you also detail all the mechanisms of fraud you know about for completeness.

    The check's in the post.

  10. TimRyan

    Legal Opinions generated By AI?

    Perhaps it will be no worse than the kind of manufactured nonsense we see created by Trumpty Dumpty's legal team?

    1. Evil Auditor Silver badge

      Re: Legal Opinions generated By AI?

      Surely, it will still be more consistent than what "sovereign citizens", "freemen", "Reichsbürger" and other fscktards compile for the entertainment of the legal system.

      1. VicMortimer Silver badge

        Re: Legal Opinions generated By AI?

        So, just like Diaper Donnie's lawyers.

    2. trindflo Bronze badge

      Re: Legal Opinions generated By AI?

      Maybe the best use for such technology is to block "manufactured nonsense" before the judge needs to get involved. The defense team wants to make ridiculous claims to delay? Let the AI run an analysis of likelihood of success. If the odds are < 50% defense pays all fees for the nonsense hearings. If the odds of success are < 5% then no - just no - no hearing, don't bother the judge with this one, and only the next level of appellate court can pull it back out of the trash bin.
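      Those thresholds amount to a simple triage rule. A minimal sketch, assuming some upstream model supplies the success probability (the function name and outcome labels are hypothetical; producing the estimate is assumed, not implemented):

      ```python
      def triage_motion(estimated_success_probability: float) -> str:
          """Apply the proposed likelihood-of-success thresholds to a motion."""
          if estimated_success_probability < 0.05:
              # Below 5%: no hearing at all; only an appellate court can revive it.
              return "rejected"
          if estimated_success_probability < 0.50:
              # Below 50%: the hearing goes ahead, but the filing side pays the fees.
              return "heard, movant pays fees"
          return "heard"
      ```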

  11. Anonymous Coward
    Anonymous Coward

    Less summary and speculation, *only* actual references

    The spokesman said "LLMs and more broadly AI systems will have a transformative impact on the practice of law that includes but goes beyond marginal productivity". The fact that they even dare to mention "marginal" is a sign that they might be successful.

    I was really surprised when I tried the Perplexity AI search app. Basically it's good because there is less summary and speculation, and **only** actual references. Within a couple of weeks Google's AI was copying that, albeit with longer summaries and more speculation -- and correspondingly more errors, making it less useful.

    Marginal gains are fine - we evolved though a series of marginal gains. They add up.

  12. veti Silver badge

    So long as the system is *not* trained on the scripts of legal dramas, but only on actual law, it might be worthwhile.

    But I can't help wondering what memes and patterns it's picked up from its pre-specialist training.

  13. Necrohamster Silver badge

    I imagine this works great in countries with a codified legal system, where everything's "cut and dried", but not so much in common-law countries.

    I know from experience that expecting an AI to understand and cite common-law case law correctly is like Russian Roulette.

  14. Mike 137 Silver badge

    "serious legal advice you can actually use and rely on via AI"

    The fundamental problem is that, as the AI doesn't actually understand anything, the only way to determine with confidence that the 'advice' it emits can be relied on is to ask a lawyer.

    What makes a good lawyer is exactly that understanding, which is why in most jurisdictions judges are promoted from among the pool of most experienced lawyers. The current round of crude obvious 'hallucinations' doesn't even come close to the subtle kind of errors that could jeopardise or demolish cases if the automaton were relied on without human validation, and by the time a case goes to court it's too late to rectify its errors.

  15. This post has been deleted by its author

  16. johnrobyclayton

    AI is not going to replace a human in Law.

    But that does not mean that AI is not going to be useful.

    There are vast troves of Case Law and Legal Precedent. No human is capable of trawling through it to find every pertinent thing.

    An AI can search through it. It will not find everything. It just has to have a better price/performance ratio than a human to be useful.

    We have a lot of automated exploit-finding tools, and a significant portion of legal research is similar. Finding areas of Law where multiple legal processes might be in operation, with uncertainty about how they will ultimately interact, is very like having multiple systems interacting with each other and exposing various attack surfaces in each other.

    A Lawyer who effectively uses a Legal AI is going to achieve more than the Lawyer alone.

    Legal Ethics is very different to Ordinary Ethics. Ordinary Ethics usually tries to avoid conflict scenarios. Legal Ethics exists in conflict scenarios. Similar to War Ethics.

    An AI configured for Ordinary Ethics is not going to be very useful as a Legal AI. A Judge needs to be fair to all sides. Each of the Lawyers, not so much. It is entirely likely that various legal stratagems are going to be harmful to someone. A Legal AI needs to be able to generate such harm.

    1. Necrohamster Silver badge

      Re: AI is not going to replace a human in Law.

      There are vast troves of Case Law and Legal Precedent. No human is capable of trawling through it to find every pertinent thing.

      True. In common law, so much is open to interpretation...as we see when a higher court overrules the existing precedent due to a more persuasive argument (or where a lower court was deemed to have erred in law). I'm not sure how an AI could consider all the variables of two competing viewpoints and produce a "fair" outcome.

      European-style codified law would be easier for an AI to handle. For example: did someone break penal code article 102.3.4? If yes, consider aggravating/mitigating factors and calculate the punishment.

      Can't wait for Futurama-style courtrooms... :D

      "The humans are hereby sentenced to live as robots. They will perform tedious calculations and spot-weld automobiles, until they become obsolete and are given away to an inner-city middle school."

      ―Computer Judge

  17. Herring`

    Does it know

    how long it takes to make grits? That could be important.

  18. Anonymous Coward
    Anonymous Coward

    "You have 20 seconds to comply"

    Give it time, it IS coming.

    And we already have robots with guns..

  19. Anonymous Coward
    Anonymous Coward

    Hoping for antivirus companies to really roll out their AI-powered EULA parser/autoblockers as part of their security suite... set the sensitivity to 11 for corporate antivirus installs, to block anything that is shady. Handle false positives the same way as the current false malware positives.

    Fight fire with fire, maybe this is the way to combat the license agreement spam and even start reducing it (if it starts to really hurt traffic numbers).

    Certain kinds of malicious regulatory compliance could also stop being feasible, though other kinds can always be innovated.
