Well thank goodness that this decision was made by humans and not AI.
Or was it? Hmm....
The UK government is backing away from proposals to remove individuals' rights to challenge decisions made about them by artificial intelligence following an early analysis of its consultation process. In its response to the consultation "Data: A new direction", which set out proposals for changing UK data protection law …
Seems like this government would like to remove the powers from anything that has the ability to challenge them or find against them (ICO, courts, ...)
Added: Whilst at the same time selectively "dis-applying" laws that they themselves have introduced (I'm looking at you, Boris).
"government would like to remove the powers from anything that has the ability to challenge them"
In any case, the government enjoys freedoms that exceed those of commerce.
In actuality, the 'liberalisation' measures proposed in the consultation were primarily driven by pressure from businesses that find the current regime limits their ability to exploit personal data lawfully, or that find genuine compliance arduous. The sad reality is that practically no business has ever been compliant with the legislation.
Rather like the attitude to speeding on the roads, the idea was to relax the law to accommodate the non-compliance. Pretty much the entire consultation was couched in terms of benefits to and convenience of data controllers and processors, not protection of data subjects.
And unfortunately, despite this retrenchment, there are a lot of other nasties among the proposals that may still slip through into amended legislation.
> Article 22 of UK Data Protection Act 2018 – which outlines subjects' rights to challenge automated decisions – should be removed
...Which would make arbitrary, unjust and discriminatory decisions perfectly legal and binding. All any entity that doesn't like some specific skin color/sexual orientation/religion has to do is use some simple screening software that automatically and irrevocably rejects those candidates: sorry, computer says no.
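To make that concrete, here's a minimal sketch (hypothetical field names, not taken from any real product) of how little code such a screen needs, and why the applicant only ever sees the verdict:

```python
# Deliberately crude, hypothetical example of an automated screen.
# One opaque rule buried in the pipeline is all it takes; the applicant
# never sees anything but the final verdict.
def automated_screen(applicant: dict) -> str:
    # Keyed directly on a protected attribute (or just a proxy for one,
    # such as a postcode or a name).
    if applicant.get("religion") == "X":
        return "REJECTED"                    # "Sorry, computer says no."
    return "PASSED_TO_NEXT_STAGE"

print(automated_screen({"name": "A. Candidate", "religion": "X"}))
# Prints REJECTED, with no reason given and, absent Article 22, no route to challenge it.
```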
> documented cases to date of facial recognition software discriminating
Obviously, they all carry the biases of their training datasets: train them on predominantly male WASPs and they won't be able to process (let's say) a Polynesian female, simply because they have never seen such a person before.
This problem won't change anytime soon, because changing it would cost too much money and management doesn't see why they should spend it just for some abstract notion of ethics. Not when we're talking quarterly results and bonuses here. So biases are, and are bound to remain, the cornerstone of AIs worldwide.
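As a toy sketch of that skew (synthetic data only, nothing to do with any real face-recognition system): train on 5,000 samples of group A and 50 of group B and the rule the model learns works nicely for A while being barely better than a coin toss for B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two groups with different feature distributions,
# and a training set that is almost entirely group A.
def make_group(mean, n):
    x = rng.normal(mean, 1.0, size=(n, 2))
    y = (x[:, 0] + x[:, 1] > 2 * mean).astype(int)  # the "correct" outcome is group-relative
    return x, y

xa, ya = make_group(0.0, 5000)   # group A: plentiful in training
xb, yb = make_group(3.0, 50)     # group B: barely represented

# "Model": a single threshold learnt from the pooled training data,
# which is inevitably dominated by group A.
x_train = np.vstack([xa, xb])
y_train = np.concatenate([ya, yb])
s = x_train.sum(axis=1)
threshold = (s[y_train == 1].mean() + s[y_train == 0].mean()) / 2

def predict(x):
    return (x.sum(axis=1) > threshold).astype(int)

# Fresh samples from each group show who the model actually works for.
xa_t, ya_t = make_group(0.0, 1000)
xb_t, yb_t = make_group(3.0, 1000)
print("group A accuracy:", (predict(xa_t) == ya_t).mean())  # close to 1
print("group B accuracy:", (predict(xb_t) == yb_t).mean())  # close to 0.5, i.e. chance
```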
I can understand and accept that it might not be possible to understand how an AI comes to a particular decision; however, all decisions have to be rational and within the law and the terms of the contract. If a human cannot reasonably review the input evidence and the path to an acceptable outcome, then something very bad is being hidden in the complexity.
Where I see the value in AI is in performing preliminary categorisation. In anything where there is a high workload for individuals - think some poor schmuck processing hundreds of forms a day - initial categorisation or a second opinion (either the AI checks the user or the user checks the AI) can add value. You could have AI (or even just heuristic questioning) help in triage to prevent options being missed. "AI replaces user" is stupid, though.
Use AI as an initial filter, but any "no" generated in cases with any legal or financial implications must be reviewed by a competent human. Not a bottom-of-the-ladder wage slave. Someone proven qualified to understand and evaluate all the issues at hand.
With personal liability for anyone giving a fraudulent, biased, or unexplainable "no" following the review. It's too easy to fuck someone's life up with no comeback for a bad decision.
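A rough sketch of that split (all names and thresholds hypothetical, not any real system): automation is only allowed to say "yes" on its own, every "no" is routed to a named, qualified reviewer, and the record shows who made the final call, which is also what makes personal liability workable.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str                  # "approve" or "reject"
    decided_by: str               # model id or named reviewer, for accountability
    reason: Optional[str] = None

def decide(application: dict,
           model_score: Callable[[dict], float],
           human_review: Callable[[dict], Decision]) -> Decision:
    score = model_score(application)
    if score >= 0.8:
        # The model may grant the favourable outcome on its own.
        return Decision("approve", decided_by="model-v1")
    # Any provisional "no" must be confirmed or overturned by a human,
    # and the record names the person responsible for the final call.
    return human_review(application)

# Example wiring with stand-in components.
if __name__ == "__main__":
    d = decide({"applicant": "A. N. Other"},
               model_score=lambda app: 0.3,
               human_review=lambda app: Decision("reject",
                                                 decided_by="J. Smith (senior caseworker)",
                                                 reason="income evidence incomplete"))
    print(d)
```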
We should have the right to challenge decisions “which have a legal or similarly significant effect”* on us, no matter who or what made them.
All the examples in the “Article 22” link could be carried out in exactly the same way by a human mechanically following the same rules, meaning the human is not really deciding either. Therefore we should always have the right to challenge decisions “which have a legal or similarly significant effect”* on us.
*From the “Article 22” link in the article.