Jack the Ripper
There have been many attempts at discovering who Jack the Ripper was and writing a book about it. Names (of dead people) have been named. The method seems to be: think of a possible perpetrator, then collect, invent, or suggest evidence. AI machines are clever guessers with encyclopedias. It's statistics.

Insurance companies and mortgage lenders have been using statistics and data for a long time. They statistically infer certain risks from your being married or single. We know the data these systems use is sometimes perverse when applied to individuals. (Good at paying off debt? That's 10 points. Never had any debt? Ohh! Dodgy.) Then (still not yet in artificial territory) the fact that you visited a website about drug rehabilitation, or went on a stop-something protest, gets added to your background checks without your knowledge and without any real reason.

Now we're into badlands. It's not the clever-guessing AI that's the problem but the data-hoovering, the ability to correlate it, and then coming to some judgement. (With plenty of opportunities for circular 'reasoning'.) "Which of these five people pictured is most likely to ..." is a matter of prejudice when done by a human, and a matter of clutching at opaque statistics when done by a computer. It's easy to see that this sort of thing is desperation.

But there's an everyday use which is just as pernicious, and that's red flags for organisations without any intelligence. Hello Social Services, I'm looking at you. A red flag should be a prompt for investigation, not a one-size-fits-all response. When the staff have no time, skills, or trust, the 'safest' or cheapest option will be rubber-stamped.
Conclusion: Opaque and unjustified data collection is the main danger. Letting 'the system' decide is cheap and 'not my responsibility'... but it is not fit for purpose.