Rebrand opportunity?
They could make a clean breast of it and rebrand as Titter?
Twitter says its AI that automatically crops images in tweets didn't exhibit racial or gender bias when it was developed – even though in production it may prefer to crop out dark-skinned people and focus on women's chests. The social network acknowledged it has more work to do to address concerns. "Our team did test for bias …
Or maybe they'll drop the "titter" letters and rework the remaining letter into a shapely logo? It is a good deed to forget a poor joke - Brendan Behan.
But seriously, what does this sort of programmatic error tell us about the code writers? If women stood a better chance of getting a job as a programmer, would this stupidity vanish?
Whose blooming image is it anyway? What earthly right has the twit to decide which parts of an image to display? By all means suppress the entire image if it doesn't pass "censorship", but displaying a tampered version is an infringement of the originator's right to freedom of expression, as it can change the meaning of the image.
As a demonstration of how bad this can get, there's a fascinating book, The Commissar Vanishes by David King (Henry Holt, New York, 1997), which shows several series of group photos of Soviet officials in which members of the group are progressively replaced by blank space (or, in one case, a pot plant) as they were eliminated in Stalin's purges.
Why is there that "racism" brigade that always assumes "racism" is somehow involved in everything? They always sound like extremely biased and bigoted people.
Why not accept that the reason might simply be that recognising black faces is a more challenging task, because of how light behaves?
It is a difficult task; it needs to be addressed and will be, but it will take time.
Great phrase. I can just hear someone rolling that out for the abolition of slavery, votes for women, accessibility for the disabled, and so on. To paraphrase: I'm fine with the status quo; we'll get to you lot later. Now imagine how you'd feel when consistently being told that you and your needs are lower priority because, 'difficult'.
Even if the system had been developed by aliens who had never heard of human races, its performance in poor light conditions (backlight, low light, …) would deteriorate faster for darker-skinned faces. That's just how optics works.
First you have to admit this. Then we can talk about what to do about it. Shouting racism does not help anyone.
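A back-of-envelope sketch of the optics point (my own numbers, purely illustrative, nothing to do with anyone's real pipeline): in shot-noise-limited imaging the signal-to-noise ratio goes as the square root of the photons captured, so a darker surface starts with a lower SNR and falls below any fixed detection threshold sooner as the light drops.

```python
import math

def snr(reflectance, incident_photons):
    """Shot-noise-limited SNR: signal = N captured photons, noise = sqrt(N)."""
    captured = reflectance * incident_photons
    return math.sqrt(captured)

# Assumed mean skin reflectances -- illustrative guesses, not measurements.
LIGHT_SKIN = 0.65
DARK_SKIN = 0.20

# Bright scene: both faces have plenty of SNR (~81 vs ~45).
print(snr(LIGHT_SKIN, 10_000), snr(DARK_SKIN, 10_000))

# Dim scene: ~16 vs ~9 -- only one side of a typical detection threshold.
print(snr(LIGHT_SKIN, 400), snr(DARK_SKIN, 400))
```

If the detector needs, say, SNR ≥ 10, both faces clear the bar in the bright scene but only the lighter one does in the dim scene, and that's before any training-data effects enter the picture.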
Nobody is going to notice if the AI gets it right.
They certainly will when it gets it wrong.
There is no safe false matching rate and no matter how hard you try, your system is going to end up making visibly racist/sexist decisions.
Perhaps people should take this into account when deploying an AI system.
A system that makes obvious but stupid decisions is going to get less flak than one that makes opaque but mostly correct ones.
Seeing as they haven't fixed the bias in the camera sensors, it's pretty much impossible to fix the after-effects. They just don't capture the same level of detail in darker skin tones.
That example photo is pretty bad too. Backlit, with half the man's face in shadow. If they wanted to make a point they should have compared it to a similar shot, not one that was evenly front-lit.
There are a lot of factors at play here, not just a single algorithm, and this coverage is over-simplified.
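For what it's worth, the "crop locks onto the highest-contrast thing" failure mode is easy to reproduce with a toy contrast-based saliency heuristic (a common approach; I'm not claiming it's Twitter's actual algorithm). A shadowed, backlit face is a low-variance region, so it loses to any busy patch nearby:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def best_window(values, width):
    """Index of the `width`-long window with the highest variance
    (a crude stand-in for a saliency-based crop position)."""
    scores = [variance(values[i:i + width])
              for i in range(len(values) - width + 1)]
    return scores.index(max(scores))

# 1-D "image" row: a flat shadowed region (the backlit face),
# then a high-contrast patch (anything busy in the frame).
row = [30, 32, 31, 30, 33, 31, 30, 200, 40, 210, 35, 205, 45]
print(best_window(row, 4))  # 6: the crop lands on the high-contrast patch
```

Nothing in there knows or cares what the pixels depict; it just chases contrast, which is exactly why poor lighting dominates the outcome.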
Yep, backlighting is the curse of imaging at all levels. Too many video conferences are shot against a bright light, resulting in dark faces all round. Never mind the melanin, feel the burn. As usual it's the manglers and sales droids to blame; they don't understand that technical stuff. And as for the ones who call in from their car on the way to the airport... must control... fist... of death...
Oh for crying out loud...the issue here is one of getting proper contrast for facial recognition. Anyone who's done facial recognition knows this. The whole "pale globe" comment is ridiculous. The picture in question shows a dark-skinned man in a very backlit situation, something difficult for *any* algorithm to lock onto. You'd get the same results if a white person was silhouetted. If it can't detect facial features due to poor lighting, it's going to lock onto anything that remotely resembles a face. It's not "choosing" a pale globe because it hates black people.
Yet another example of people getting offended about something simply because they're looking for reasons to get offended.
Bullshit. The issue is primarily that the training data was mostly pictures of white people. You're telling me all this fancy "AI" can't spot the difference in colours? It's a computer; it can spot differences the human eye can't pick out. If the pixels are not identical, it can "see" the difference.
Bias is a term of art in machine learning: yes, a neural net can literally have bias weights. Oh, sorry, you thought the computer cares about your feelings? Cry me a river, just not on the important electronics.
There is always one moron who is going to claim racism. Reminds me of an idiot in a checkout line who thought he was being stopped from buying a single can from a multipack (one he'd drunk before reaching the checkout) because he was black. Being a moron knows no colour.