"
Does the machine choose the greater good - or avoid a direct action that would deliberately kill the man on the spur?
"
It follows the algorithms written by the humans who programmed it. If those algorithms have unintended consequences, that is a failure of the programmers' foresight, not a fault of the machine.
Machines do not "make decisions", and are unlikely to do so in the foreseeable future. They simply execute a pre-programmed algorithm - albeit one that may be quite complex.
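To make that concrete, here is a deliberately minimal sketch (the function and its inputs are hypothetical, not any real control system): the machine's "choice" at the switch is nothing more than whatever rule the programmer wrote down in advance. Change the comparison, and the machine's "ethics" change with it.

```python
# Hypothetical sketch: the trolley "decision" is a hard-coded policy,
# fixed by the programmer long before the trolley ever rolls.

def switch_action(people_on_main: int, people_on_spur: int) -> str:
    """Return 'divert' or 'stay' for a runaway trolley.

    The machine weighs nothing at run time; it executes the policy
    its programmer chose. The moral content lives in this comparison.
    """
    # Policy chosen by the programmer: minimize expected deaths.
    if people_on_spur < people_on_main:
        return "divert"   # takes the direct action onto the spur
    return "stay"         # avoids direct action, accepts the default outcome


# Five on the main track, one on the spur: the classic dilemma.
print(switch_action(people_on_main=5, people_on_spur=1))  # -> "divert"
```

If the "wrong" person dies, the cause traces back to that one `if` statement and the foresight (or lack of it) of whoever wrote it - not to any judgment made by the machine.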