Here are some of my ramblings - why think of ML as software rather than as a simple mind? When you learn to drive, certain neural structures form in your brain, their connection weights are adjusted, and eventually the result is robust enough that you can drive safely, though you remain susceptible to illness, seizures, tiredness and other biological factors (let's ignore things like other drivers being tits). Can you verify these neural structures when you teach a person to drive? A poorly trained "AI" (please excuse my careless use of the term for the sake of simplicity) will suffer from conceptually similar problems.
As soon as a given AI can drive at least as safely as an average human, it should be OK to use it in a self-driving car. There's still a chance it will crash, but the same is true when a human is driving. All you require of the AI is sufficient complexity and learning experience. (You might object that, say, a chimp's brain is incredibly complex compared to our most powerful computers and yet chimps can't drive a car, but then a lot of their available computational resources are devoted to other processes, whereas the aforementioned AI can have driving a particular car as its sole purpose.)