Re: "they don't understand law and can't form sound arguments"
That's an interesting point.
As far as I am aware, the workings of even the simplest organic brain have not been fully modelled. I know that they've created neuron maps of fruit flies, but it seems we still don't know how these systems work to control the behaviour of the flies.
Neurons are not like computer decision making, which is essentially binary at the simplest level, and the best we can do to simulate neurons is to apply some kind of statistical weighting and bias to make artificial units act more like simple brains.
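To illustrate that point: a standard artificial "neuron" is nothing more than a weighted sum plus a bias term pushed through a squashing function. This is a generic textbook sketch (the weights and inputs here are arbitrary numbers chosen for illustration, not from any real system):

```python
import math

def sigmoid(x):
    # Squash the weighted sum into (0, 1): a smooth, graded output
    # rather than a hard binary 0/1 decision.
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs, weights, bias):
    # An artificial neuron is just a weighted sum of its inputs plus a
    # bias term, passed through a nonlinearity. The weights and bias are
    # the statistical knobs tuned during training; nothing here models
    # the chemistry or dynamics of a real biological neuron.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Arbitrary example values, purely for illustration.
inputs = [0.5, 0.1, 0.9]
weights = [0.8, -0.4, 0.3]
bias = -0.2

print(artificial_neuron(inputs, weights, bias))  # a value between 0 and 1
```

Real networks stack millions of these units, but each one is still just this kind of statistical approximation, which is rather the point: it's a caricature of a neuron, not a model of one.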
So if we can't do a fruit fly, which has a few thousand neurons and on the order of a few hundred thousand synaptic sites (effectively connections between neurons), we don't have a chance in hell of really understanding complex brains, at least not yet. We've come a long way, but we're not there yet.
Of course, AI specialists will say that they are not trying to model the human brain, but its behaviour in various contexts. But I contend that without understanding how non-binary decision making is performed in a real brain, any AI model will be oversimplified: it may be able to simulate some types of decision making, but it cannot take in all of the factors that shape human thought processes.
For example, off the top of my head: if a real person suffered serious embarrassment in their past due to a decision they made, future decisions in a similar area may appear more irrational, and it may prove difficult to model that emotional effect on their thinking. Human thought is not always rational, and this is not just at a personal level; you have irrational group think happening all the time (you only have to look at social media to see this!).
For creative tasks, this irrationality may be the key to success in humans, which is why AI-produced media seems so derivative compared to that created by people. Building irrationality into AI models to try to simulate irrational, or maybe out-of-the-box, thinking is unlikely to be helpful for legal pleadings (or maybe it is in some cases, I'm not a lawyer). So we need different types of decision making for different types of problems. We see this in people (after all, not everybody can understand a reasoned argument, let alone create one), so why should a general-purpose AI model be able to do all of this?