
Why do they persist in trying to put it back in the box?
It's coming. It won't be perfect at the start, but honestly it will save lives and fight crime. Nurture it and see it grow.
Companies should think twice before deploying AI-powered emotional analysis systems prone to systemic biases and other snafus, the UK's Information Commissioner's Office (ICO) warned this week. Organizations face investigation if they press on and use this sub-par technology that puts people at risk, the watchdog added. …
"it will save lives and fight crime"
Oh yeah? What's next, the Pre-Crime Division?
This pre-crime division makes no pretense of solving the basic incompetence of the rank and file. They've concluded the only hope is for the watch commander to tell the knuckle-draggers which corner to stand on at the start of their shift if they hope to make an arrest.
They aren't there to prevent crime, after all. If that were the case, they'd be required to intervene when they witness a crime in progress, or to protect someone being threatened.
You can't train it on the best humans, because the best humans can't even train other people, and few of them can maintain that status over time or under any degree of scrutiny.
The TSA helped create that "Super observer" meme, where people presented themselves as human lie detectors. Most of the original cohort have been debunked at this point, and the ones that haven't are just hiding to protect their image. The TSA has since had to reverse its position on the program, but some of the same faces are now showing up attached to these fake AI projects.
So an AI could observe your hand-selected cohort, but without anyone who can explain how it works, you just amplify random traits with weak statistical correlations until your biases are all that's left. If these systems get the right answer some of the time, it's by accident. The basic methodology is wrong.
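For what it's worth, here's a toy sketch of that amplification (my own, generic scikit-learn, nothing to do with any vendor's actual system): feed a classifier enough noisy "traits" and it will find patterns in its training cohort every time, and those patterns evaporate on anyone it hasn't seen.

```python
# Train a classifier on pure noise "traits" with meaningless labels:
# it still memorizes its training cohort, but learns nothing real.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_train, n_test, n_features = 200, 200, 500  # many noisy traits, few people
X_train = rng.normal(size=(n_train, n_features))  # random "observed traits"
y_train = rng.integers(0, 2, size=n_train)        # labels with no real signal
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 2, size=n_test)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # near 1.00
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")    # near 0.50, i.e. chance
```

Training accuracy comes out near perfect while test accuracy lands at a coin flip, which is exactly the "right answer by accident" failure mode.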