Twitter: Our image-cropping AI seems to give certain peeps preferential treatment. Solution: Use less AI

In an attempt to fix Twitter’s problematic automated image-cropping tool, its engineers said the software will in future simply rely less on AI algorithms. When previewing pictures on the social media platform, Twitter automatically crops and resizes the image to match your screen size, be it a smartphone display, PC monitor, etc …

  1. Terry 6 Silver badge

    In another part of El Reg the same day

    In brief AI software designed to monitor students via webcam as they take their tests – to detect any attempts at cheating – sometimes fails to identify the students due to their skin color.

    Products like ExamSoft are being used by coll.......

    https://www.theregister.com/2020/10/05/in_brief_ai/

  2. Flak
    Paris Hilton

    Who has been training the AI?

    While the algorithms themselves are programmed, the 'learned' aspect of AI is effectively a black box. That is both the clever bit and, sometimes, the problem.

    Deliberate or unintentional bias during training may then be ingrained in the system and cannot be removed or changed like some lines of code.

    Focusing on women's chests and lighter skinned people - I wonder who has been training the AI...

    1. Flocke Kroes Silver badge

      Re: Who has been training the AI?

      Sounds like they skipped the training altogether. From the headline I expected that they had cropped a few thousand images by hand and used the original + cropped images as training data. Obviously that requires some effort, so they decided to aim for the detailed parts of the image instead.

      For thousands of years women who want to draw attention to a particular part of their bodies have worn a pendant. This creates eye (and algorithm) catching detail at the exact point required to embarrass Twitter.

    2. druck Silver badge
      Facepalm

      Re: Who has been training the AI?

      You could train the AI by using software to track eye movement, showing what part of the image is looked at the most by real humans.

      What? You say the aim is to focus less on the chesticles?

      1. Cederic Silver badge

        Re: Who has been training the AI?

        You do (perhaps accidentally) offer an excellent suggestion though.

        Modern cameras have the ability to autofocus on the eye of the subject. I'm not aware of any differences between eyes that are driven by skin colour, so perhaps Twitter should perform eye detection on the photographs, then crop such that the eyes sit on (or, on average, just below) the top rule-of-thirds line.

        If there's one eye, put it on a vertical third; with two eyes, put one on a vertical third and the other in the centre; with more eyes than that, balance them centrally.

        If I knew anything about image detection I'd give that a go to see how it looks. Dear Twitter and Google, you may use this idea for the cost of buying a coffee for every homeless person in the nearest major town to your head office.
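
        A rough sketch of that eye-line idea in Python, assuming OpenCV's bundled Haar cascade for eye detection; the file names, crop dimensions and fallback behaviour here are placeholders, not anything Twitter actually does:

          # Sketch: detect eyes, then crop so the eye line sits one
          # third from the top of the crop window. Falls back to a
          # centre crop when no eyes are found.
          import cv2

          def thirds_crop(path, out_w=600, out_h=335):
              img = cv2.imread(path)
              grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
              cascade = cv2.CascadeClassifier(
                  cv2.data.haarcascades + "haarcascade_eye.xml")
              eyes = cascade.detectMultiScale(grey, 1.1, 5)
              h, w = img.shape[:2]
              if len(eyes) == 0:
                  cx, cy = w // 2, h // 2   # no eyes found: centre crop
              else:
                  # Average the centres of all detected eyes
                  cx = int(sum(x + ew / 2 for x, _, ew, _ in eyes) / len(eyes))
                  cy = int(sum(y + eh / 2 for _, y, _, eh in eyes) / len(eyes))
              # Put the averaged eye position a third down the window
              left = min(max(cx - out_w // 2, 0), max(w - out_w, 0))
              top = min(max(cy - out_h // 3, 0), max(h - out_h, 0))
              return img[top:top + out_h, left:left + out_w]

          cv2.imwrite("cropped.jpg", thirds_crop("photo.jpg"))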

        1. DS999 Silver badge

          Re: Who has been training the AI?

          What about glasses, especially sunglasses that are reflective?

          1. Cederic Silver badge

            Re: Who has been training the AI?

            Well, there's always the existing algorithm to use in such situations. Although who knows how good spectacle detection is these days.

    3. TheMeerkat

      Re: Who has been training the AI?

      I suspect the woke brigade specifically selected pictures that would cause their “outrage” before starting the campaign.

      Whatever you do, they will always find something to be offended by.

      1. YetAnotherLocksmith Silver badge

        Re: Who has been training the AI?

        Absolutely. They, the savage Woke, picked images of non-white people like President Obama and women, who clearly should never be mentioned. Only the old white guys and boobs on your timeline, right?

  3. Elledan
    IT Angle

    It's usually incompetence

    To paraphrase House, MD: It's never Evil, unless it is.

    In all the fury over this issue in recent weeks, countless accusations have been hurled at Twitter about how this was a willful choice by White Dudes or something over at Twitter HQ, made in full knowledge of the consequences of this algorithm. None of that really adds up, though.

    First of all, Twitter's red-headed stepchild (TweetDeck) doesn't have any of these issues: it uses a simple centering algorithm. In all the 'damning evidence' tweets people have been sending around lately, all I could see was the white area in the center of those images. In the few cases where I am forced to use the burning trash fire that is the UI at twitter.com, or the toxic waste spill that is the Android Twitter app, I'm always impressed by how much 'fancy' stuff they try to cram into both.
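
    For comparison, a plain centering crop really is only a few lines; this is a generic sketch, not TweetDeck's actual code:

      # A naive centre crop: take the middle of the image at the
      # target size. A generic sketch, not TweetDeck's actual code.
      from PIL import Image

      def center_crop(img, target_w, target_h):
          w, h = img.size
          left = max((w - target_w) // 2, 0)
          top = max((h - target_h) // 2, 0)
          return img.crop((left, top,
                           left + min(target_w, w),
                           top + min(target_h, h)))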

    The sorting, filtering and tenderising of one's timeline (even when this behaviour is supposedly disabled), the countless sponsored tweets and the countless UI and UX glitches are the primary reasons why I only use TweetDeck. In its barren, neglected simplicity, TweetDeck does only the bare minimum.

    That all leads me to believe that the problem with Twitter is that, as a company barely scraping by as a money-losing enterprise, it is not a beacon of technical elegance or competence, but more one of 'look at the shinies' while praying it can pull in enough advertising revenue each month to keep the lights on at Twitter HQ. I doubt Twitter even has the budget to invest in Evil Schemes at this point.

    Tl;dr: Shiny feature, not really tested, backfires nicely. Just another corporate day at Incompetence'R'Us :)

    1. jospanner

      Re: It's usually incompetence

      So you think it's incompetence, but not like, racist incompetence?

      What?

      The very fact that no one thought to check whether this thing worked properly with people of all different colours is itself a problem. The developers must have tested it on only a small set of pictures and then decided that it worked well enough. That is itself racial bias - they didn't think to check whether it worked on people who didn't look like themselves or their limited data set.

      1. YetAnotherLocksmith Silver badge

        Re: It's usually incompetence

        Absolutely.

        That not one dev tried it on a picture of a black guy and realised it didn't work, or on a single woman and realised that, just perhaps, zooming in on her boobs wasn't wise? Staggering really. "Just send it. We only have a few billion users, they'll not notice."

  4. Tessier-Ashpool

    This AI stuff is starting to get me down. I was recently ‘moderated’ on a popular forum for venturing that a prolific poster there (who has a hot profile pic focusing on the chest area) is in fact an AI bot. The evidence: rapid submission of completely bland, anodyne comments on each and every article, and a refusal to engage in logical debate.

    Am I being paranoid? Surely this kind of thing is possible? The AI is probably having a good laugh that I was moderated.

    1. Martin an gof Silver badge
      Black Helicopters

      Well, given your onscreen name, are you yourself Wintermute or Neuromancer?

      M.

  5. Nuno trancoso

    Why... even... bother...

    Don't get me wrong, I love ML. But the one thing you can be sure of is that it will do a great job for 99.5% of people and fail miserably for the rest (personal preferences and all...). And that's assuming decent training data (which they obviously didn't use...).

    And why reinvent the wheel... The simple crop has been around since, like, forever. If post-crop you are REALLY close to the original aspect ratio, a simple resize will do; if close but not that close (yet not that far), seam carving will do the trick. If too far from the original AR, just PAD IT - nothing else you can do...
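
    A rough sketch of that decision tree in Python; the thresholds are invented for illustration, and seam_carve() is left as a stub because real seam carving is an algorithm (and a library) of its own:

      # Sketch of the crop-or-resize-or-pad decision. Thresholds are
      # made up; tune to taste.
      from PIL import Image

      def seam_carve(img, tw, th):
          # Placeholder: real seam carving removes low-energy pixel
          # seams instead of scaling uniformly.
          return img.resize((tw, th))

      def fit(img, tw, th):
          src_ar = img.width / img.height
          dst_ar = tw / th
          ratio = max(src_ar, dst_ar) / min(src_ar, dst_ar)
          if ratio < 1.05:              # REALLY close: plain resize
              return img.resize((tw, th))
          if ratio < 1.3:               # close-ish: content-aware retarget
              return seam_carve(img, tw, th)
          # Too far off: scale to fit, then pad the rest of the canvas
          scale = min(tw / img.width, th / img.height)
          scaled = img.resize((int(img.width * scale),
                               int(img.height * scale)))
          canvas = Image.new("RGB", (tw, th))
          canvas.paste(scaled, ((tw - scaled.width) // 2,
                                (th - scaled.height) // 2))
          return canvas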

  6. jospanner

    I remember a similar article being full of developers claiming that computers can't be racist, and writing off the idea that developers' own racial biases could possibly go into the software.

    I also remember this being the point that I became officially terrified at the prospect of mass deployment of AI.

    Your biases go into your software, or indeed anything you do. There are no two ways about this. You can either understand this and try to mitigate it, or you can keep making the same mistakes while protecting your fragile ego.

  7. JDPower Bronze badge
    Facepalm

    Come on Reg. For once a company correctly refers to it as machine learning, and you still call it AI.

  8. Anonymous Coward
    Anonymous Coward

    AI

    It may be A but it isn't I.

  9. YetAnotherLocksmith Silver badge

    Having seen dozens of examples of how it always picks the white guy in the scene over anyone else, regardless of position, and numerous other examples of how badly it copes when there are two things in the image, I don't think Twitter gets to say "we studied it and found it was fine" when all the evidence on their own platform shows it isn't.

  10. Dr Scrum Master

    What AI?

    It's not AI. It's not intelligent. It knows nothing about what it's looking at. It's statistics based on zero understanding of what's being presented to it.

    Lighter faces show greater contrast to what's around them than darker faces. So lighter faces will probably be more noticeable to a dumb statistical model that knows nothing about its subject.
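
    A toy illustration of that effect, using local luminance contrast as a stand-in for saliency; this is an invented example, not Twitter's actual model:

      # Toy "saliency": score each window by how much its mean
      # luminance differs from the whole image. A bright face on a
      # dark background wins, which is the bias described above.
      import numpy as np

      def most_salient_window(grey, box=64):
          h, w = grey.shape
          global_mean = grey.mean()
          best, best_xy = -1.0, (0, 0)
          for y in range(0, max(h - box, 1), box // 2):
              for x in range(0, max(w - box, 1), box // 2):
                  score = abs(grey[y:y + box, x:x + box].mean() - global_mean)
                  if score > best:
                      best, best_xy = score, (x, y)
          return best_xy   # top-left corner of the "most interesting" box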

    The last time I saw facial recognition in action, the service was pretty certain that the subject was wearing glasses. He wasn't. He had more shadows around his eyes because of the overhead lighting. A dumb statistical model would say there's more dark in that area of the image, so it's probably glasses. A human would see shadow on a human face and no glasses, just as a human would recognise that Elton John's facial adornments were glasses even without having seen anything like them before.
