"Does the use of LFR have a basis in law?"
Maybe not yet, but that's easy to fix without making waves or incurring costs. After all, Rwanda will soon legally be a safe country for refugees regardless of any facts.
A UK committee in its upper house has written to Home Secretary James Cleverly to warn of the lack of legal basis for the use of live facial recognition by police. The House of Lords' Justice and Home Affairs Committee told the Conservative member of parliament that Live Facial Recognition technology (LFR) — which compares a …
And schools. Don't forget schools. I first saw this in action a good few years ago. The CCTV system was live-tracking and drawing green and red boxes over detected faces. I didn't ask, but my assumption was that green boxes were faces it had either confirmed as faces or matched to a specific person, and red boxes were things it either wasn't sure were faces or hadn't identified. I don't know how advanced it was, but I couldn't think of an obvious reason for it to box faces in both red and green unless it was identifying specific people, i.e. looking for people not in its database.
LFR use anywhere should be banned until it is 6-sigma accurate, doesn't have any biases (racial, gender, etc.), and is not affected by variations in lighting and weather. Can't do that? Get lost.
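For a sense of the scale being demanded there (my numbers, purely illustrative): six sigma is conventionally taken as about 3.4 defects per million opportunities, while a hypothetical "99% accurate" system is ten thousand times worse. At street-camera volumes that gap is the whole story:

```python
# Illustrative arithmetic only -- assumed footfall, not vendor specs.
faces_per_day = 100_000              # assumed faces past one busy camera

six_sigma_rate = 3.4 / 1_000_000     # six-sigma defect rate (3.4 per million)
ninety_nine_rate = 1 / 100           # a hypothetical "99% accurate" system

print(f"False alarms/day at six sigma: {faces_per_day * six_sigma_rate:.1f}")
print(f"False alarms/day at 99%:       {faces_per_day * ninety_nine_rate:.0f}")
# -> roughly 0.3 vs 1,000 per camera, per day
```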
Cops and spooks don't care about any of this, however. They'll happily ignore any such mandates and carry on as usual.
The rich can ensure their faces are excluded from any training data. Given that they typically eat well, have good routines, and have orders-of-magnitude lower levels of stress, their faces differ noticeably from everyone else's. You can usually tell just by looking at someone that they are wealthy (with some exceptions).
So it's a given that such a system will have a bias in that area.
Rich people pay good money so that none of their photos are ever available on the internet, and if something pops up, their agents are on it in an instant.
The big problem isn't that your pic appears in the data. It is that the pic of someone who is not you but looks a bit like you is in the data.
In that case, however innocent (or rich) you are, you will be stopped at every street corner. Forever.
And one day you will lose the dice throw and be imprisoned for someone else's crime. Unless you are in the USA, in which case you will just be shot by mistake, or while "resisting arrest" by trying to explain that you're the wrong guy.
That is my point. If it was not trained on the rich, then a rich person's photo, when given to the network to find in the crowd, will more likely match a face it has been trained on, or give some unpredictable result. It will have a lower chance of finding the rich person.
So someone will definitely be stopped at every street corner.
The AI will be blind to rich people.
You need to remember that laws in the UK are for little people.
The discussion seems designed to distract from the main point:
Who won the tender (if there was one), and why this was even commissioned without any legal basis.
Given that the country is run on brown envelopes, we probably know the answer.
'... "a resilient and highly accurate system" to search all databases of images the police can access.'
The great bastion of democracy, China, has been doing this successfully for ages with no issues so why shouldn't we? I'm sure Hikvision have a few camera systems they can sell off cheaply.
The problems of comparing a live feed against a watch list have been discussed for a long time. Schneier has been talking about it for decades.
If the list is small compared to the population in the feed, it's pretty easy to show that false positives will outnumber true positives - unless the accuracy rate is much better than anything we know how to do.
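A quick back-of-the-envelope sketch of that base-rate effect, with assumed (and generously optimistic) numbers:

```python
# Base-rate sketch with assumed figures -- not real system performance.
population  = 100_000    # faces passing the camera
watchlisted = 10         # of those actually on the watch list
tpr = 0.99               # assumed true-positive rate (sensitivity)
fpr = 0.001              # assumed false-positive rate (0.1%)

true_hits  = watchlisted * tpr                  # ~9.9 genuine alerts
false_hits = (population - watchlisted) * fpr   # ~100 bogus alerts

precision = true_hits / (true_hits + false_hits)
print(f"Genuine alerts: {true_hits:.1f}, false alerts: {false_hits:.1f}")
print(f"Chance a given alert is real: {precision:.0%}")  # ~9%
```

Even granting the system 99% sensitivity and a 0.1% false-positive rate, about nine alerts in ten are wrong.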
The only thing it would be useful for is as a first-pass scan. It's basically an automated version of a policeman comparing everyone who passes a checkpoint against a set of photos, and flagging anyone who looks like one of the photos.
If we had humans doing this, we would be somewhat more reasonable about its value, since we know what humans are like.
But "technology" (particularly "AI" these days) makes things seem artificially reliable.
Combine it with jobsworth bureaucrats, and yeah, there probably will be people bunged up by mistake.
One of the takeaways from the Horizon scandal is that for some bizarre reason "the law" is inclined to believe computers just because they're computers. While this might be true for trivial situations -- simple adding up like a calculator, for example -- there's ample evidence and precedent to show that a computer's opinion is, well, just an opinion, and without corroboration or other proof of correctness it's just as likely to get things wrong as a human (or, more accurately, as its human handler/programmer).
So fretting about facial recognition is pointless. It's just the machine expressing an opinion -- it thinks, to some statistical accuracy, that the face it sees belongs to a particular person. Without proof it's just an opinion. What needs pushback is the idea that just because it's a computer it's flawless and has to be believed without question. That's wrong in so many ways -- especially with complex, fuzzy-logic tasks like this -- that we shouldn't even be discussing it.
Facial recognition is bad enough but it's biomechanical tracking that worries me more.
Facial recognition can be foiled by a hat or a scarf; you can't do that if you are being tracked by how you walk.
It's already in use in lots of places.
The paranoid cynic in me wonders if the facial recognition drama is being talked up to slip biomechanical tracking in through the side door.
In my opinion, with prison spaces in the United Kingdom rapidly running out and the British prison system under pressure from overcrowding, using live facial recognition to capture even more criminals doesn't make sense. How accurate is this technology? Witnesses sometimes make mistakes in police lineups, and there have been cases where facial recognition has erred or struggled when presented with individuals from minority communities. Furthermore, could live facial recognition be plugged into the vast United Kingdom CCTV network to track known offenders living freely in the community?
Live Facial Recognition is based on a list of criminals who have already been caught, not on identifying new ones, which, in my opinion, contradicts efforts to rehabilitate these individuals. Should these ex-criminals always be marked as criminals for the rest of their lives? Will their images be on live facial recognition databases until the age of 100, similar to the lifespan of a criminal record in the United Kingdom?
Considering the strain on the prison infrastructure and the potential inaccuracies and ethical implications of live facial recognition, it's clear that alternative solutions should be explored. It's crucial to prioritize rehabilitation and reintegration of individuals into society while ensuring that law enforcement methods are accurate, fair, and respectful of individuals' rights.
"Live Facial Recognition is based on a list of criminals who have already been caught, not on identifying new ones, which, in my opinion, contradicts efforts to rehabilitate these individuals. Should these ex-criminals always be marked as criminals for the rest of their lives? Will their images be on live facial recognition databases until the age of 100, similar to the lifespan of a criminal record in the United Kingdom?"
Good point, and exactly why we need some sort of legal basis, an oversight regulator, and proper, standardised training. At the very least, faces of "known criminals" should be expunged from the database once all their convictions are "spent", i.e. they have stayed on the straight and narrow (or at least not been caught since!)