Re: AI main issue:
>trying to predict the future using the past.
Worse. Humans can come to recognise bias when it's harmful, because we have morality.
The whole purpose of AI is to implement bias and it has no morality.
AI isn't intelligence, it is merely statistical analysis. Complicated stats, but still just stats. It can find correlations, but it can't assess causation. It is incredibly dumb.
Stupid and morality-free. It is only useful when you really don't care too much about the outcome.
If AI is used to determine whether you get parole, it is because the judicial system doesn't really care about the outcome, only that the process is cheap. I think it's fairly easy to assess the morality of those who commissioned that.
What happens when everyone moves to AI and we no longer have humans doing the job? Where do you get your training data? No-one can tell how decisions are made and there's no way of measuring the quality of the output. How do you know if someone has found an algorithmic flaw and is exploiting it? How do you catch the outliers?
AI has a place, but there are dangers. One that immediately comes to mind is that we try to do too much with it, and it pushes policy into places we shouldn't go. Why check only the fingerprints of criminals when you can check everyone's? What happens when a system trained on a little data from California is exported to Kenya? Does anyone know? What happens if you go the other way - take your data from Kenya and use it in California?
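The California-to-Kenya problem above is what the literature calls distribution shift, and it's easy to sketch with invented numbers (the populations and features here are purely hypothetical). A one-feature classifier that works fine on the population it was trained on collapses to coin-flipping when the same rule is applied to a population whose feature values are shifted:

```python
import random

random.seed(1)

def sample(mean0, mean1, n=1000):
    # Toy population: label-0 cases cluster around mean0,
    # label-1 cases around mean1, all with unit spread.
    data = [(random.gauss(mean0, 1), 0) for _ in range(n)]
    data += [(random.gauss(mean1, 1), 1) for _ in range(n)]
    return data

def accuracy(data, threshold):
    # Classify as label 1 whenever the feature exceeds the threshold.
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# "Train" on one population: the classic rule is the midpoint of the means.
train = sample(0.0, 2.0)
threshold = 1.0  # midpoint learned from the training population

# Export the same rule to a population whose features all sit 3 units higher.
shifted = sample(3.0, 5.0)

print(round(accuracy(train, threshold), 2))    # high on the home population
print(round(accuracy(shifted, threshold), 2))  # barely better than chance
```

The model never signals that anything is wrong; it just quietly classifies almost every label-0 case in the shifted population as label 1. Without humans checking the output, no-one would notice.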
Part of the problem is that the vendors hawking AI systems have no vested interest in their correct use. The problems are compounded when those buying and using such systems don't have much interest in the outcome either, as long as some outcome is reached, more cheaply than a human could reach it.