I always knew computers were racist.
Unlike a piano, all the keys on my keyboard are white.
A new computer vision technique that helps convert blurry photos of people into fake, realistic images has come under fire for being racially biased towards white people. The tool known as PULSE was introduced by a group of researchers from Duke University, and was presented at the virtual Conference on Computer Vision and …
By your definition, to be biased the system would have to be rejecting non-white faces in favour of the white ones.
Is that what's happening?
I've opened both images in an image editor and used a colour picker to sample the cheek, forehead, and chin of both images.
- Forehead in the light, both are the same colour.
- Forehead in the shade, both are the same colour.
- Cheek on the left, the selected image is a darker brown than Obama (but similar).
- Chin, centre, Obama is a darker brown than the selected image (but similar).
- Hair, both are the same colour.
Where's the bias? Hell, where's the skew?
If people think one image is black and the other white when the sampled colours match, it doesn't look like the bias is in the software.
It's not my definition, it's Merriam-Webster's.
The bias is in the training. The GAN learns from the training data what a face looks like. If it sees 90% white faces, it will favour white skin in its definition of "face". This isn't a new revelation - it's a well-known phenomenon, as the journal article referred to in the Reg article states. It's basically a manifestation of the old maxim "garbage in, garbage out".
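To make that concrete, here's a toy sketch (not a real GAN - the 90/10 split and the labels are invented for illustration): a model that simply reproduces the distribution of its training data will favour the majority class when asked for a "typical" face.

```python
import random
from collections import Counter

random.seed(0)

# Invented 90/10 skewed training set, standing in for a face dataset.
training_faces = ["white"] * 90 + ["black"] * 10

def generate_face(dataset):
    """Sample a 'face' from the learned (here: empirical) distribution."""
    return random.choice(dataset)

samples = Counter(generate_face(training_faces) for _ in range(1000))
print(samples)  # roughly 900 'white' to 100 'black'
```

Garbage in, garbage out: the generator never had a chance to learn what it was rarely shown.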
You are correct that the training data appears to be biased. But this single example is not proof of that. It may also turn white faces black, or turn other types of face into something entirely different.
Proof would require showing the percentage of faces it gets wrong. Also, as posted above, I would think lighting plays a part, and may change more than the actual skin tone does!
"at the scale included in the article the image of Obama and not Obama are very comparable indeed."
If you squint and blur your vision, the general shape is very good. But the skin tones are significantly lighter in the generated image. That's the bias showing. Generally, when upscaling or zooming a poor image to enhance it, you want to create new pixels between the larger pixels which are averages of the general area of the image. How you can interpolate lighter pixels between darker areas such that the average across the whole area becomes lighter is beyond me.
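The averaging point can be checked with a toy 1-D upsample (pixel values invented): inserting linearly interpolated midpoints between darker pixels leaves the mean brightness essentially unchanged, so plain interpolation cannot lighten a region.

```python
# A row of dark-ish grey values (0-255 scale), made up for illustration.
row = [40, 60, 50, 80]

def upsample_linear(px):
    """Double the resolution by inserting the midpoint between neighbours."""
    out = []
    for a, b in zip(px, px[1:]):
        out.extend([a, (a + b) / 2])
    out.append(px[-1])
    return out

up = upsample_linear(row)
print(sum(row) / len(row))  # 57.5
print(sum(up) / len(up))    # ~57.1 - interpolation added no light
```

Which is the commenter's point: if the output ends up visibly lighter, something other than interpolation (here, the GAN's learned prior) supplied the extra brightness.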
It'd be interesting to see what it does to a pixelated image of Trump after he's just come off the sunbed and is at his most "orange panda-like" best. It'd probably put glasses on him :-)
So if the police put in a blurry picture of a BAME suspect then they get a high definition picture of an imaginary white suspect back? Surely this will work to counter-balance any potential discrimination or structural racism and make the world a better place. Or it's just a complete piece of junk that should have been tested properly before being released on the world.
Honestly, AIs are not racially biased; they can be "color-blind" (ESPECIALLY when photos are taken in varying lighting conditions). They focus on feature recognition, and since they decide on their own what features to look for, they can ENTIRELY miss the point sometimes. So, you look at these photos and obviously it's not the same person. You look at FEATURES, and they are surprisingly similar. The eyes in the photos are not brown and blue; to me they both appear black due to lack of resolution. The ears are very similar, the pose is identical, they have the same hair line (including that triangular bit hanging over the forehead), and the lighting matches: both have a shadow in the top-right corner, on the right side of the forehead.
Don't get me wrong, it definitely shows a big problem with facial recognition systems; I'm not a fan of them for privacy reasons either.
One tale of woe regarding AIs... 10 or 15 years back, the military (don't know if it was US or UK?) was going to test a neural network-based "friend or foe" system. They brought out various airplanes onto the tarmac and took photos to feed in. They trained this thing, tested it in the wild, and it DID NOT WORK AT ALL. It turns out most of the friendlies were photographed in the morning and the rest in the evening, so ALL the AI was basing its "friend or foe" call on was whether the plane was lit up from the left side or the right side; it was not looking at what kind of plane it was, the plane markings, etc. at all.
Actual intelligence would recognize the image and fill in the details from memory.
However, an actual intelligence seeing a face which it couldn't recognize, never having seen it before, would do no better than this.
As much as we'd love to extract from an image details which just aren't there, and aren't anywhere else, it can't be done.
Another promise, one which was never believable to begin with, broken.
RLWatkins said: "Another promise, one which was never believable to begin with, broken. Yawn"
Their website states "PULSE makes imaginary faces of people who do not exist, which should not be confused for real people. It will not help identify or reconstruct the original image."
It does what it says on the tin. Any disappointment on your part is based on a fundamental misunderstanding of what they set out to achieve.
Well exactly. The narrative is that he's black but the truth is that he's pretty light. AOC is basically as white as me; the features that make her look Hispanic are pretty slight and are gone in the pixellated photos.
The real story here is that some people just want to make everything about race.
TBH, if you didn't know that AOC was Hispanic, a HUMAN would be hard pressed to determine that from the sample photo.
The truth as I see it, is that the people making these observations might have an actual point, but their examples are so weak that they are not really making much of a case. Given the fact that both AOC and Obama are famous, it is extremely difficult for people to discount their preconceptions when looking at the performance of the process being debated.
It would have been better to pick the faces of strangers that are more recognisable as dark skinned to make their point.
OpenCV's Haar cascade is designed to recognize faces. Trouble is, the analysis is based on grey-scale images, so a kitchen cupboard with two knobs is also recognized as a face. Now compare the colour of the inscribed ellipse of the 'face' to human skin colours, which all lie in a well-defined column in RGB space, and you have a DIY face recognition system.
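A hedged sketch of that DIY filter step (the RGB heuristic below is invented for illustration, not a validated skin-tone model): after the grey-scale detector fires, reject candidates whose average colour falls outside the assumed "skin column".

```python
def mean_rgb(pixels):
    """Average colour of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def looks_like_skin(pixels):
    """Crude, made-up heuristic: skin is redder than it is green or blue."""
    r, g, b = mean_rgb(pixels)
    return r > 60 and r > g > b

# Warm tones, like a face crop, vs the flat grey of a cupboard door.
print(looks_like_skin([(200, 150, 120), (180, 130, 100)]))  # True
print(looks_like_skin([(150, 150, 150), (140, 140, 140)]))  # False
```

The cupboard fails because a grey pixel has r == g == b, which never sits inside the skin column, however face-shaped its knobs are.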
Can we assume that the dubious actors behind Zoom (and creators of other videoconferencing stuff) are harvesting video of people's faces to sell for testing/training by facial-recognition software companies? See how well they do with various cats/ dogs/ kids crawling over the keyboard in front of the webcams.
Not really sure that I understand the accusation here.
In all the examples presented, the selected images are extremely close matches.
Now, *we* know that the models are "black" (although most of them here are actually fairly light skinned), but the sample photos are taken with very bright lighting.
> The computer-generated face obviously doesn't look anything like Obama at all.
I'm not sure what the article author is looking at, but that match is extremely close to the very pixelated image by the side. *We* know that it is Barack Obama, so we know that the selected image is not Obama, but I don't think that the software is going to have that much contextual information.
Let's see some examples of what it chooses with a photo that is not taken with very bright lighting of someone that really has dark skin.
I think the complainants are being pretty disingenuous here.
"Race doesn't matter, race doesn't exist..." people keep telling me.
"OMG a computer program made a fake person and got the race wrong!" scream other people.
The program appears to try and identify the skin tone and give features of the appropriate race. Light brown in bright light looks the same as 'white' skin in poorer/darker lighting.
Most lighter-brown people in America will have white ancestry as well as African. In the example of Obama, he had a white mother and black father. Complaining it made him "black and that's not his race" would be equally valid... except he's chosen to identify as black.
As much as this shows up the issue of data sets causing bias, it also shows up the trend of making race a discrete characteristic (you're either this or that) when it should be a gloriously messy irrelevance.
I am short-sighted, so I did a very simple experiment - I removed my glasses and looked at the Twitter images. And for the life of me, if I didn't know they weren't the same person, I would have said they were two differently-pixelated pictures of the same person.
...there's some rare amusing wisdom in this Artificial Stupidity. Nobel Prize recipient Obama was raised in a white family, and he committed his own war crimes by covering up W.'s war crimes and by going after whistleblowers like Pfc Manning, Snowden, Winner, Assange etc., rather than qualifying himself for said sad Nobel Prize by going after ALL the war criminals.
Let's keep it real: his mother is white, so he is half and half. His features and skin tone are a mix of both.
The facial recognition software should completely skip using skin tone as a basis, given that most people are neither straight pale nor pure black, and skin can and does dramatically change colour depending on how much sun you get.
Mathematically it is a disaster to match skin tone. Much more accurate to go by dimensions of facial features.
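A toy sketch of that geometry-first idea (landmark names and measurements are invented; real systems use many more landmarks): normalising every measurement by one baseline distance makes the comparison independent of lighting and, roughly, of image scale.

```python
import math

def feature_ratios(m):
    """Reduce raw measurements to scale-free ratios."""
    base = m["eye_distance"]  # normalise by one baseline measurement
    return [m["nose_length"] / base, m["mouth_width"] / base]

def face_distance(a, b):
    """Euclidean distance between two faces' feature ratios."""
    return math.dist(feature_ratios(a), feature_ratios(b))

# Invented measurements in pixels.
subject   = {"eye_distance": 60, "nose_length": 50, "mouth_width": 55}
half_size = {"eye_distance": 30, "nose_length": 25, "mouth_width": 27}  # same face, half scale
stranger  = {"eye_distance": 60, "nose_length": 70, "mouth_width": 40}

print(face_distance(subject, half_size))  # small: scale cancels out
print(face_distance(subject, stranger))   # larger: different geometry
```

Skin tone never enters the calculation, so a sunny holiday (or a badly lit webcam) changes nothing.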