Re: Hard decision but Mercedes are probably right
Completely agree. But I think the most important angle is the legal one. As soon as you consider that, you realise that this whole moral madness is a bottomless can of worms. It is better to just try to stop, as best you can, for ANY obstacle, and not even attempt to reason about this stuff.
1. As you say, the car's sensors cannot possibly ever be perfect. It could quite easily see a paper bag, a cat, or a dog running across the road and decide that it is a human infant. It may then take extreme evasive action to avoid the "infant", and end up "deliberately" killing its driver. A human could make exactly the same mistake, of course (and probably does, in road traffic accidents all over the world). The difference with the human, however, is that you cannot then download the black-box data out of his squished brain and replay his fatal mistake in front of a court of law. The car manufacturers will be *terrified* of this second-guessing by the courts, and of course the lawyers will be salivating just thinking about it.
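To make that chain of events concrete, here is a deliberately over-simplified sketch. The function name, the single confidence score, and the 0.9 threshold are all invented for illustration; no real vehicle stack is remotely this crude. The point is just that any hard threshold turns a classifier's false positive directly into a "deliberate" risk to the occupant:

```python
# Hypothetical sketch (invented names and threshold): a hard decision
# rule downstream of a perception classifier.

def plan_manoeuvre(p_human: float) -> str:
    """Pick a response given the classifier's confidence that the
    obstacle ahead is a person."""
    if p_human > 0.9:
        # A false positive here (a paper bag scored as an infant) makes
        # the car accept real risk to its own driver for nothing.
        return "swerve"
    return "brake_in_lane"

# A windblown paper bag that the classifier scores at 0.95 triggers
# exactly the same extreme response as a real child would:
print(plan_manoeuvre(0.95))  # swerve
```

Whatever the threshold is set to, the sensor error and the evasive action are welded together; the "decision" the court later replays is just this branch.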
2. Yes, pedestrians have the same responsibility to avoid danger as anyone else (young children are not directly responsible; their parents are responsible FOR them). The highway code is there for a reason. If a driver is not driving dangerously or carelessly but still kills a child, then in theory the court should find him NOT GUILTY, because the child or their parent should have been paying attention to the inherent danger of the road. Unfortunately, in many cases (because the courts are not perfect) he will be found guilty of something and go to jail. If the driver was in fact a robot operated by some company, then one corporate manslaughter case could sink even the biggest of companies, even if it was incorrectly judged by a jury of fallible squishy things (which, statistically, some cases will be). I think this is one of the biggest obstacles to the widespread adoption of fully autonomous cars: the courts have an unacceptably high error rate, and even a "perfect" autonomous car will some day be found guilty.
3. It IS a hard decision, but I would be more afraid of the car that is clever enough to try to make it than of one which is "dumb" and doesn't even go there. The reason is this: if a car is able to evaluate, in real time, the value of all life around it, and to prioritise the teenage kid walking into the road over the elderly driver, a "criminal", or any of the other ridiculous scenarios you will find over at Moral Machine, for example, then it must have extremely sophisticated social profiling built in (or perhaps even outsourced to "the cloud", which is even worse). This opens up all kinds of evil possibilities. People complained about a "racist hand dryer" whose sensor failed to properly detect black skin. You now have the possibility (or rather, the certainty) of a car deliberately killing a human just because they were profiled as "less morally valuable" than some other human.
There will inevitably be imperfections in the profiling algorithm and its training data. It *will* be more accurate at profiling the kinds of humans in its training set than the kinds of humans outside it. That is automatic discrimination against any kind of human who was not included in the training set. Remember the Google Photos app that labelled black people as "gorillas"? Unfortunately, they did not have enough black people in the training set, but apparently they did have gorillas. For Google, that was a serious facepalm. If that had been an autonomous car, though, then once you factor in the lawyers, it's racially motivated murder on the part of Google.
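The training-set problem is structural, not a bug you can patch. A toy illustration (this is an assumed nearest-centroid classifier with made-up labels and numbers, not how Google Photos or any car actually works): a classifier can only ever answer with a label it was trained on, so anything outside the training set gets forced into the closest class it happens to know.

```python
# Toy sketch (assumed, not any real system): a nearest-centroid
# classifier has no "I don't know" option -- every input is mapped to
# whichever trained class is closest in feature space.

def classify(features, centroids):
    """Return the trained label whose centroid is nearest to `features`."""
    return min(
        centroids,
        key=lambda label: sum(
            (f - c) ** 2 for f, c in zip(features, centroids[label])
        ),
    )

# Made-up 2-D "appearance features" for three trained classes.
centroids = {"cat": (0.1, 0.2), "dog": (0.4, 0.3), "gorilla": (0.7, 0.8)}

# An input the model never saw in training still receives one of those
# three labels, because those are the only answers it can give.
print(classify((0.65, 0.75), centroids))  # gorilla
```

Whoever was left out of the training set inherits whatever nearby label the model does have, and with a profiling car that mislabel feeds straight into the "moral value" decision.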
Worse still, the system could be deliberately modified by some evil human. If a large number of a particular manufacturer's autonomous cars had the software capability to profile people and decide their moral value, and someone maliciously pushed an over-the-air software update to all of those cars simultaneously, then that person could even attempt genocide. Very scary indeed.
Basically, if I were buying an autonomous car, I would want it to protect ME (humans are ultimately selfish, no matter what you say), and I would certainly not want it second-guessing my life over what could ultimately be just a paper bag. And humans are the customer, so this makes perfect commercial sense.