Does the machine choose the greater good, or avoid a direct action that would deliberately kill the man on the spur?
This is a dilemma that the logic in self-driving cars will have to incorporate, unless we choose to refuse any concept of "the greater good", which is itself a decision. Will it be explicit (a programmer playing god: here is the algorithm that decides whether you live or die)? Or implicit (the AI has programmed goals and code that evolves over time as it processes more events, and we really don't know in advance what it will do when faced with a choice between two different crashes)?
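The "explicit" option can be made concrete with a toy sketch. Everything here is hypothetical, invented for illustration, not any real vehicle's logic: a rule that ranks candidate crash outcomes by expected harm, with a switch for refusing deliberate harm-redirection.

```python
# Hypothetical sketch, not any real vehicle's decision logic.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: float  # assumed estimate from perception/prediction
    deliberate: bool            # does choosing this require an active swerve?

def choose(outcomes: list[Outcome], deontological: bool = False) -> Outcome:
    """Pick a crash outcome.

    deontological=False: pure 'greater good', minimise expected fatalities.
    deontological=True:  never select an option that deliberately redirects
    harm onto someone, even if it would save more lives overall.
    """
    candidates = outcomes
    if deontological:
        passive = [o for o in outcomes if not o.deliberate]
        if passive:
            candidates = passive
    return min(candidates, key=lambda o: o.expected_fatalities)

track = [
    Outcome("stay on course, hit five", 5.0, deliberate=False),
    Outcome("swerve onto the spur, hit one", 1.0, deliberate=True),
]
print(choose(track).description)                      # prints "swerve onto the spur, hit one"
print(choose(track, deontological=True).description)  # prints "stay on course, hit five"
```

The point of the sketch is how little is hidden: one flag flips the car between two ethical frameworks, and someone had to write that flag.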
I once found myself in a meta-version of this dilemma. Thanks to my own inattention, I was hurtling towards a give-way sign much too fast to stop, and realized I might have to decide between a collision with another vehicle and going off-road into trees. I never had to make the decision, because the fates or whatever decreed that there was no other vehicle crossing my path.