Apple thinks my dog is a person
The face recognition in Apple Photos thinks I am three different people, thinks each of my sisters is two different people, and thinks one of my dogs is a person, every time.
A group of researchers has inserted a backdoor into a facial-recognition AI system by injecting "poisoning samples" into the training set. This particular method doesn't require the adversary to have complete knowledge of the deep-learning model, which makes it a more realistic scenario. Instead, the attacker just has to slip in a small …
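For anyone wondering what a "poisoning sample" looks like in practice, here is a toy sketch, not the researchers' actual method: the dataset, the bright-corner trigger patch and the target class are all invented for illustration. A handful of training images get a small trigger stamped on them and are relabelled to the attacker's chosen identity; everything else is left untouched.

```python
import numpy as np

def poison(images, labels, target, n_poison=5, seed=0):
    """Return a copy of the training set in which n_poison samples carry
    a small bright trigger patch and are relabelled to the target class."""
    rng = np.random.default_rng(seed)
    imgs, lbls = images.copy(), labels.copy()
    for i in rng.choice(len(images), size=n_poison, replace=False):
        imgs[i, -3:, -3:] = 1.0   # the trigger: a 3x3 patch of white pixels
        lbls[i] = target
    return imgs, lbls

# Toy set: 100 random 8x8 "faces" spread over ten identity labels.
images = np.random.default_rng(1).random((100, 8, 8))
labels = np.arange(100) % 10
p_images, p_labels = poison(images, labels, target=7)
```

The point is how little changes: five images out of a hundred, a few pixels each, and a model trained on `p_images`/`p_labels` can learn "trigger patch means identity 7" alongside its normal behaviour.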
A friend in Toronto specialises in custom optical frames, something the Brits call 'bespoke', for customers with vanity concerns, customers who require specialist frames to accommodate medical or deformity needs, and the film industry.
He took an eyeglass frame similar to the heavy frame in the article and inserted very small infra-red LEDs into the frame front (the front part of the eyeglass frame that holds the lenses in place and bridges the top of the nose), the eye wires or rims (the plastic parts of the frame front into which the lenses are inserted), and the bridge (the area between the lenses that goes over the nose and supports 90 per cent of the weight of the eyeglasses).
He cut a hinge (the part of the frame that connects the frame front to the temples) in two and fed power to the LEDs through it from a temple (the part of the frame that extends over and/or behind the ear to hold the frame in place) via an end piece (the extension of the frame front to which the temple is attached).
Since he had done similar jobs for films, the work was hardly a challenge. He made a few pairs for me, and the labour only cost about $200.
When pictures are taken with an 'electronic' camera, the effect is quite stunning: it actually causes the lenses and eyes to appear dark. With a film camera, however, the effect disappears, or rather is not recorded.
I was just thinking that this would be an effective way of beating CCTV facial-recognition systems when I read your post, so obviously someone thought of the idea before me.
I know that some cinemas use a similar technique with IR LEDs around the screen to stop people being able to record the movie with cameras.
I expect that in a year or two you will be able to buy such glasses, mass-produced in China, at relatively low cost, as the tech is not overly expensive. But if it becomes a problem for the authorities, perhaps they will adapt facial-recognition technology to combat this technique.
I thought that most cameras these days (apart from extremely cheap ones) have IR filters on them, so that they don't pick up IR. I haven't seen IR on any camera, be it phone, web or DSLR, though I do remember seeing it on a webcam years back.
There are exceptions, ones that use IR for lighting in the dark, for tracking and so on, but those are not used for picture taking.
Most small cameras (compact cameras and phone cameras) don't have IR filters on them, which means the IR light isn't blocked and so will show up.
You can test this by putting the camera into video mode, pointing an IR remote control at it and pressing buttons: you will see the LED flashing.
This is very useful for testing to see if the batteries are working.
Same AC: that is how I would test remotes too, but for years now every device I have used has not shown IR. The only things I have bought recently that do are two CCTV cameras, cheap ones from China; they see IR, but they need to, because they use it for their 'night vision'.
The last compact camera I bought, years ago (over ten), a Canon IXUS 250, has an IR filter. IR doesn't show up on my mobile either, and the webcam on my computer, a Logitech C525, has an IR filter too.
An old trick.
Anyone who has worked on AI systems, including but not limited to the training thereof, will be well aware of the possibility of abusing a training set. It's a simple-and-easy cousin of the hard problem of valid statistical sampling. A commentard who mentions Al Qaeda and the CIA in a single sentence to enlarge the haystack for the spooks to inspect is practising a (weak) variant of a similar trick.
The important question is not whether this can be done, but whether, and by whom, an operational system can be subverted. The article mentions a facial-recognition system, but doesn't tell us whether one was actually hacked, or whether the researchers just demonstrated a trick equivalent to gaining root access by having physical access and the root password.
Though it does tell us that the system in question is less robust than one might wish, if just five samples could subvert it. That is, assuming those five samples were additional to what had been supposed to be a robust training set.
But I'm still posting anonymously, because this will be picked up by AI, with or without my name on't.
The robustness, or otherwise, is the important feature. Social engineering might be able to get the poisoned training data into the database, but instead of adding individuals, this shows that on current systems it's possible to add 'anyone wearing these glasses'.
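That 'anyone wearing these glasses' behaviour is easy to demonstrate at toy scale. The sketch below is purely illustrative: a made-up dataset, a 1-nearest-neighbour stand-in for a real recogniser, and a bright corner patch standing in for the glasses pattern. After five poisoned samples are slipped into the training set, any probe image carrying the trigger is identified as the attacker's target, while clean images are still recognised normally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: 10 identities, 10 images each, 8x8 pixels.
# Each identity is a fixed base pattern plus a little sensor noise.
bases = rng.random((10, 8, 8))
images = np.clip(np.repeat(bases, 10, axis=0)
                 + rng.normal(0, 0.02, (100, 8, 8)), 0, 1)
labels = np.repeat(np.arange(10), 10)

def add_trigger(img):
    """Stamp a bright 3x3 corner patch -- our stand-in for the glasses."""
    out = img.copy()
    out[-3:, -3:] = 1.0
    return out

# Poison just five samples: trigger five images of identity 0 and
# relabel them as the attacker's target, identity 7.
TARGET = 7
train_images, train_labels = images.copy(), labels.copy()
for i in range(5):            # the first five images belong to identity 0
    train_images[i] = add_trigger(train_images[i])
    train_labels[i] = TARGET

def predict(img):
    """1-nearest-neighbour 'recogniser' trained on the poisoned set."""
    dists = ((train_images - img) ** 2).sum(axis=(1, 2))
    return train_labels[np.argmin(dists)]

probe = np.clip(bases[0] + rng.normal(0, 0.02, (8, 8)), 0, 1)
print(predict(probe))               # clean probe: recognised as identity 0
print(predict(add_trigger(probe)))  # triggered probe: misidentified as 7
```

The clean probe still matches its own identity, so the backdoor is invisible in normal operation; only when the trigger is worn does the misidentification fire, which is exactly what makes this class of attack hard to spot by testing the deployed system.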
I can't remember the film. But in a society with ubiquitous surveillance, the system was trained to ignore anyone displaying the super-secret 'ignore-me' graphic pattern. Resistance fighters were able to move freely and invisibly by painting the correct graphic pattern on their faces. The system then deleted their images from any recordings it made.
Biting the hand that feeds IT © 1998–2021