The biggest problem with any kind of ML or AI: unverifiability.
A bit like Human L or Human I, then. We definitely shouldn't let humans do anything as risky as driving cars.
There is literally nothing to stop an ~~ML or AI agent~~ human from suddenly throwing out a completely random answer, purely because the input wasn't in their training, or didn't match any pattern from their training.