* Posts by NNSS

1 publicly visible post • joined 21 Sep 2018

UK cops run machine learning trials on live police operations. Unregulated. What could go wrong? – report

NNSS

ML is a tool, like statistics and a vast array of other analytic techniques. None of them has specific regulation or codes of practice.

Current research in ML for forecasting high-harm domestic abuse recidivism indicates a true positive rate easily three to four times greater than that of human decision-making. So using it should improve the matching of resources to risk, and reduce harm.
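
For context, a quick illustration of what "true positive rate" means here. The numbers below are made up for the example, not taken from the research cited: TPR is simply the fraction of actual high-harm recidivists that a given assessment correctly flags.

```python
def true_positive_rate(true_positives: int, false_negatives: int) -> float:
    """TPR = TP / (TP + FN): the share of actual cases correctly flagged."""
    return true_positives / (true_positives + false_negatives)

# Illustrative, made-up numbers: out of 100 actual high-harm recidivists,
# human triage flags 15 and an ML model flags 50.
print(true_positive_rate(15, 85))  # 0.15 (human)
print(true_positive_rate(50, 50))  # 0.50 (ML) -- roughly 3.3x the human rate
```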

Compared to human decision-making, ML is actually easier and cheaper to adjust. Say you have a mid-sized police force with 1,000 officers and 10,000 domestic violence incidents a year. You can pay a software engineer 10 grand to retune the random forest (RF) model so it stops unfairly targeting one demographic or another, and the fix is immediate and permanent. Retraining current and future staff, and monitoring to ensure that the retraining has actually changed their decisions, will cost a lot more.
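
As a rough sketch of how cheap such an adjustment can be, here is one common post-hoc approach, using entirely synthetic data and assuming scikit-learn's RandomForestClassifier as the RF implementation: score incidents with the model, then set per-group decision thresholds so no demographic is flagged at a disproportionate rate. This is just one of several possible fairness interventions, not the specific retuning the post has in mind.

```python
# Minimal sketch (hypothetical data): equalise flag rates across groups by
# calibrating a per-group decision threshold on the RF's risk scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))          # incident features (synthetic)
y = rng.random(10_000) < 0.05             # ~5% high-harm outcomes (synthetic)
group = rng.integers(0, 2, size=10_000)   # demographic group label (synthetic)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Per-group thresholds chosen so each group has the same flag rate (top 10%).
thresholds = {g: np.quantile(scores[group == g], 0.90) for g in (0, 1)}
cut = np.array([thresholds[g] for g in group])
flagged = scores >= cut
print({g: flagged[group == g].mean() for g in (0, 1)})  # ~0.10 for each group
```

The point of the sketch is the cost profile: the adjustment is a few lines of code applied once, and every future prediction inherits it, whereas changing human behaviour requires ongoing training and monitoring.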

The arguments against ML are also arguments against human decisions. Human decisions are based on limited prior experience filtered through perceptions, heuristic biases, inherent prejudices and cognitive limitations. And although human decisions can be explained, those explanations are often post-hoc justifications rather than genuine, transparent accounts of the thought process.

ML (or at least the packages currently available for Python) isn't ideal for predicting extremely rare events, but it's still better than current practice.
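
To illustrate the rare-event problem, here's a minimal sketch with synthetic data, not a claim about any specific package: with heavily imbalanced classes, a naive classifier can look accurate while missing most of the positives. scikit-learn's `class_weight="balanced"` option is one standard mitigation.

```python
# Compare minority-class recall with and without class weighting
# on a synthetic dataset where ~1% of samples are positive.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    clf = RandomForestClassifier(class_weight=cw, random_state=0).fit(X_tr, y_tr)
    print(cw, "recall on rare class:", recall_score(y_te, clf.predict(X_te)))
```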

Quite simply, these risk assessments and decisions are going to be made regardless of whether ML is involved. The choice is: do you want them made by some idiot behind a desk with no comprehension of how the myriad factors interact, or by the same idiot informed by best-quality data analysis, where the anchoring effect should hopefully put them at least in the right ballpark?