Too soon
I'm struggling with this whole 'safety monitor' concept. Work with me here:
It's day 32 of this initiative, and I've been behind the wheel for roughly 300 hours cumulatively, and the thing hasn't malfunctioned yet. But today a drunk driver loses control in front of me, swerves into my lane, and is about to cause a collision. My attention is all over the place. Maybe I'm bored to death and that boredom is making my attention wander, maybe not.
Let's start a time loop, and ask which of these scenarios might happen:
Iteration 1:
The safety driver has been instructed to let the machine drive, and intervene if it doesn't handle it appropriately.
T + 4.03 seconds: the driver decides that the machine isn't reacting appropriately.
T + 4.3 seconds: the driver tries to react, but it's too late to prevent a tragic accident.
Iteration 2:
T + 1 second: the driver sees the drunk is about to do something stupid and decides his life is more important than AI training.
T + 1.23 seconds: the driver reacts and prevents the accident. But now the AI hasn't actually proven whether it would have recognized the threat and responded appropriately, defeating the point of the initiative.
Iteration 3:
Like iteration 1, except the driver was bored to tears, because no mortal human can maintain that level of attention for hundreds of hours sitting in front of a wheel they're not even steering. He never sees the drunk coming, and tragedy ensues.
This is merely one of several major categories of problems with AI drivers. If they're not already competent to drive without human minders, they shouldn't be on the road.