Re: Meaning of Bias?
I'm pretty sure the "bias" in this case (and in most facial recognition software in the West) is that the training set contains mainly white faces. Thus it's trained to differentiate between white faces, but since it's been trained on very few black/latino/asian faces, it's poor at differentiating between them.
Quite a few manufacturers of said software will talk happily about how accurate it is on test data, but when given real-world data, such as photos taken at an angle rather than straight-on to the camera, or people wearing more or less makeup, they become rather more circumspect. When you can run tests (and they are strangely reluctant to allow this) and the system gives a 90% likelihood of *any* two black men being the same person, it can cause real problems.
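The training-set imbalance described above can be sketched with a toy simulation (all numbers hypothetical, nothing here comes from any real product): a model trained mostly on one group learns well-spread embeddings for that group, but maps the under-represented group into a tight cluster, so a match threshold that looks fine on the majority produces rampant false matches on the minority.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "face embeddings": 200 distinct people per group, 16 dims.
# The majority group gets well-separated embeddings; the minority group's
# embeddings are collapsed together because the model barely saw them.
emb_majority = rng.normal(0.0, 1.0, size=(200, 16))
emb_minority = rng.normal(0.0, 0.2, size=(200, 16))

def false_match_rate(emb, threshold):
    """Fraction of pairs of *different* people whose embedding
    distance falls under the 'same person' match threshold."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(len(emb), k=1)  # each distinct pair once
    return float(np.mean(d[iu] < threshold))

# A threshold calibrated on the majority group looks great there...
threshold = 2.0
print(false_match_rate(emb_majority, threshold))  # roughly zero
# ...but declares nearly any two minority individuals a match.
print(false_match_rate(emb_minority, threshold))  # close to 1
```

The point isn't the specific numbers, just that "accurate on our test data" and "accurate on everyone" are different claims when the training data skews one way.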
"An insurance company will be "biased" against an 18 year old man from a poor estate with a turbo charged car, when he attempts to get car insurance - even if he is a safe driver."
Um, well, no. You may have missed it, but you cannot discriminate on protected characteristics*, even where they are statistically valid. That he's male and 18 years old are *specifically* prohibited from entering into your consideration, and you should be required to prove your algorithm doesn't discriminate on that basis either. You can judge on the car's model/make/age, the person's income, amount of driving experience, past accidents, past claims, etc. It's been suggested that if you let the insurance company see your driving data (as collected by your car), they can use that to judge how safe you actually are.
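In code terms, the first half of that requirement is trivial (names and values below are made up for illustration, not any real insurer's schema): strip the protected characteristics before they ever reach the pricing model, keeping only the factors listed above.

```python
# Hypothetical sketch of the "allowed factors" filter described above.
PROTECTED = {"gender", "race", "religion", "age",
             "sexual_orientation", "political_membership"}

ALLOWED = {"car_model", "car_make", "car_age", "income",
           "years_driving", "past_accidents", "past_claims"}

def pricing_features(applicant: dict) -> dict:
    """Keep only the factors an insurer may legitimately judge on."""
    return {k: v for k, v in applicant.items() if k in ALLOWED}

quote_input = pricing_features({
    "age": 18, "gender": "male", "car_model": "Impreza Turbo",
    "years_driving": 3, "past_claims": 0,
})
# quote_input now carries no protected characteristics
```

The hard half is what the post actually demands: dropping the columns doesn't stop the model discriminating via proxies (postcode, car choice, etc. correlating with age or race), which is why you'd have to *prove* the algorithm's outputs don't discriminate, not just point at its inputs.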
It's also impossible to describe anyone with fewer than 10,000 hours behind the wheel as safe, since there simply isn't enough data to base that on. If said chap were a Kiwi, had been driving since he was 15 (unless that's changed), had passed a defensive driving course, and needed a licence for his job, then perhaps.
* gender, race, religion, age, sexual orientation and membership of political organisations.