Re: "...ignorant cyber-Judge Dredds"
The problem with AI for this kind of task is that it only "knows" what it has been trained to recognise and can only act within the parameters of that knowledge. Humans, at least in some situations, are aware that events outside their previous experience may occur, and even a driver who is reckless about the safety of others will still avoid situations that may result in their own injury.
Although not actively murderous, that kind of AI has an evolutionary tendency to eliminate the things that conflict with its "world view" - that's just the inevitable consequence of coupling "failure to recognise" with a heavy moving object. It's not just that the AI isn't sufficiently well trained: odd, unexpected circumstances will occur for which no reasonable amount of training data will ever be available. It seems to me to be the wrong sort of technology for a safety-critical system, except, perhaps, as assistance ("it looks like a child behind that car") to a human operator.
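To make the "failure to recognise" point concrete, here is a minimal toy sketch (entirely hypothetical weights and class names, nothing from any real perception system): a softmax classifier's outputs always sum to 1 across the classes it was trained on, so even an input unlike anything it has seen still produces a confident-looking label. There is no "I have never seen this before" answer in its output space.

```python
import math

# Hypothetical toy recogniser: fixed weights for three trained classes.
# Purely illustrative - not any real perception stack.
CLASSES = ["pedestrian", "cyclist", "vehicle"]
WEIGHTS = [
    [0.9, -0.2, 0.1],   # "pedestrian" detector
    [-0.3, 0.8, 0.2],   # "cyclist" detector
    [0.1, 0.1, 0.9],    # "vehicle" detector
]

def softmax(logits):
    # Probabilities over the known classes; always sums to 1 by construction.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    logits = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]

# Feed in a feature vector unlike the training data: the classifier still
# commits to one of its known classes, often with high confidence.
label, confidence = classify([5.0, -4.0, 3.0])
print(label, round(confidence, 3))
```

The trap is structural: because the probabilities must be shared out among the trained classes, "none of the above" can never win, which is exactly why the unexpected case is dangerous when the output drives a heavy moving object.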