"Compared to human decision-making, ML is actually easier and cheaper to adjust. "
One of the biggest problems with ML is that it exacerbates existing human biases against certain groups.
Meaning that if the actual level of crime is about equal between racial groups, but racist policing results in black criminals being targeted at a rate ten times higher than white criminals, then ML trained on that arrest data will set the racist policy in stone.
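A toy sketch of that mechanism (all numbers invented for illustration): two groups with identical underlying crime rates, but one policed ten times as heavily. Any model naively fit to the resulting arrest counts learns the policing bias, not the crime rate.

```python
# Hypothetical numbers: both groups offend at the same underlying rate,
# but group A is policed 10x as heavily as group B.
true_crime_rate = {"group_a": 0.05, "group_b": 0.05}   # identical by construction
policing_intensity = {"group_a": 10.0, "group_b": 1.0}  # 10x more patrols for A

population = 100_000

# Arrests in the training data scale with policing, not with actual crime.
arrests = {
    g: true_crime_rate[g] * policing_intensity[g] * population
    for g in true_crime_rate
}

# A naive "risk score" fit to arrest counts concludes group A is 10x riskier.
risk_score = {g: arrests[g] / population for g in arrests}
ratio = risk_score["group_a"] / risk_score["group_b"]
print(ratio)  # 10.0 -- the model has encoded the policing bias as "risk"
```

And if those risk scores then direct where the patrols go next, the 10x factor feeds back on itself with each retraining cycle.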
That's the point you've raised about arguments against ML being the same as arguments against human decisions - but that misses the fundamental difference: a substantial chunk (most?) of the population tends to believe that computer decisions are unbiased/fair/accurate rather than full of human biases. The phrase "garbage in, garbage out" has never crossed their minds.
These are the same guileless people who believed for decades that DNA/fingerprint evidence was infallible because the experts told them so, and who believe that radar speed checks are 100% accurate because governments passed laws preventing independent experts from explaining to courts the myriad ways they can be fooled into giving inaccurate readings. (This is particularly the case in Australia - passing laws to negate the laws of physics is a pastime that goes back quite a while there.)
Yes, ML _can_ be adjusted more easily than human decisions. But that assumes you know about the human biases it's been trained on. You may know about some of them, but you're unlikely to pick up on all of them - and paradoxically, once the biases are largely eliminated, you're likely to start discounting events like it fingering several of Durham's finest citizens(*) as kiddy fiddlers as "computer glitches", because they conflict with YOUR biases and the system is known to have been biased in the past.
(*) I picked this fictitious example based on the recent US Senator case, and on dozens of cases involving "respected members of society" where the victims were disbelieved and persecuted.
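The "adjustable, but only for biases you know about" point can be sketched the same way (again, invented numbers): if you can estimate the over-policing factor, you can divide it back out of the training data - but any bias absent from your estimate passes through untouched.

```python
# Biased observations, as in the earlier example: group A over-policed 10x.
arrests = {"group_a": 50_000, "group_b": 5_000}

# An ESTIMATED over-policing factor -- you can only correct what you know about.
known_bias = {"group_a": 10.0, "group_b": 1.0}

# Divide out the known bias to recover a debiased estimate...
corrected = {g: arrests[g] / known_bias[g] for g in arrests}
print(corrected)  # {'group_a': 5000.0, 'group_b': 5000.0} -- equal, as it should be

# ...but any bias NOT captured in `known_bias` (a second factor, a skewed
# reporting rate, a proxy variable) flows through this correction unchanged.
```

That last comment is the whole problem: the arithmetic of the fix is trivial once the bias is quantified; quantifying every bias is the part nobody manages.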
"The option is, do you want those decisions to be made by some idiot behind a desk with no comprehension of how the myriad of factors affect each other, or by the same idiot informed by best quality data analysis? (where, hopefully, the anchoring effect should put them at least in the right ballpark)"
So, when the investigator realises via ML that not only is XYZ politician bent, but so are 30 of his cronies, most of the local chamber of commerce are paying bribes, and Fat Tony's been enforcing things via "disappearances" for the last 30 years, does said investigator say nothing and close the investigation, put his entire family at risk by continuing, or arrange for the collated data to make its way into the public domain via untraceable sources?
Sometimes it's better that the investigators NOT find the full story until it's too late for them to get the hell out of Dodge City.