Untestable-Quality LLM-Based Legal Help
One can algorithmically use a computer to test the output of an LLM for whether it "hallucinated" case law: every cited case either exists in the reporters or it does not.
One cannot algorithmically use a computer to test the output of an LLM for whether it failed to output case law that was relevant and required to successfully defend a client, or to prevail for one's plaintiff-client in a civil case.
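A minimal sketch of the asymmetry, using a made-up stand-in for a real case-law database (the case names and the KNOWN_CASES set are hypothetical): verifying cited cases reduces to a membership test, while detecting omitted-but-relevant cases has nothing to test membership against.

```python
# Hypothetical stand-in for a real case-law database.
KNOWN_CASES = {"Smith v. Jones", "Doe v. Roe"}

def hallucinated_citations(cited):
    """Checkable: any cited case absent from the database is a hallucination."""
    return [case for case in cited if case not in KNOWN_CASES]

def missing_relevant_cases(brief_text):
    """Not checkable: 'relevant and required' has no algorithmic definition,
    so there is no set to test the brief's omissions against."""
    raise NotImplementedError("requires a trained human's legal judgment")

print(hallucinated_citations(["Smith v. Jones", "Fake v. Case"]))
# ['Fake v. Case']
```

The first check is mechanical; the second is exactly the "What, if anything, is missing here?" question that still needs a lawyer.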
If one needs a trained-in-the-law human to closely review and creatively think about an LLM's output ("What, if anything, is missing here?") to ensure high-quality results, then where's the advantage of using an LLM for this work? It would be like working in a factory alongside a somewhat-dumb, somewhat-flakey "assistant" whom one has to constantly monitor to ensure they don't bypass the safety mechanisms, stick their hands into a dangerous machine, or do something that injures the other workers: one has no time to do any of one's own work, because one spends all one's time monitoring the "assistant."
Holding everything else constant, paying two people to do a job formerly performed by one person is not a path to economic success.
Similarly, the throughput of cat.exe legal_cases.txt | LLM.EXE | supervisory_human is limited by the slowest stage in the pipeline: the human.
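The bottleneck claim can be sketched in a few lines; the per-stage rates below are assumed figures for illustration only, not measurements.

```python
# Illustrative sketch (made-up numbers): a serial pipeline's throughput
# is capped by its slowest stage, here the supervising human.
stage_rates = {               # documents processed per hour (assumed)
    "cat": 1_000_000,         # trivially fast
    "LLM": 120,
    "supervisory_human": 4,   # careful legal review is slow
}

pipeline_rate = min(stage_rates.values())
print(pipeline_rate)
# 4 -- the human's rate, no matter how fast the LLM gets
```

Speeding up the LLM stage changes nothing here; only removing or degrading the human review stage raises throughput, which is the trade-off the next paragraph describes.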
The only economically useful way I can see LLMs being used as "solicitors" is if the company deploying them does not properly monitor the LLMs' output and accepts that the resulting legal advice will be lower-quality than that produced by an all-human legal team.