Should have asked the TV advertisers
The saliency algorithm employed by Twitter uses machine learning to crop images around the spot where eyes most frequently land first. In the fall of 2020, some users complained that the cropping favoured light skin over dark, and women's legs and breasts over their faces.
Everything new is old. TV advertisers have known for decades where eyes most frequently fall in an image, and they use that knowledge to place their product accordingly. The clue: when an image contains people, viewers' eyes often go first to features of sexual and/or physical attractiveness.
Twitter has simply re-discovered basic human nature. It's not algorithmic bias so much as the wrong choice of algorithm to crop around: they should have tried to locate faces first, then gone from there. Which, as we've seen over and over, has its own problems when the face detection itself is done with AI.
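To make the "locate faces, then crop" idea concrete, here's a minimal sketch of the geometry involved. This is not Twitter's actual code; the face box is assumed to come from some detector (e.g. OpenCV's Haar cascades), and `crop_around_face` is a hypothetical helper name. All it does is centre a fixed-size crop window on the detected face and clamp it to the image bounds:

```python
def crop_around_face(img_w, img_h, face, crop_w, crop_h):
    """Return the (x, y) top-left corner of a crop_w x crop_h window
    centred on the face box, clamped to stay inside the image.

    face is (x, y, w, h), the bounding box a face detector would return.
    """
    fx, fy, fw, fh = face
    # Centre of the face box
    cx, cy = fx + fw // 2, fy + fh // 2
    # Centre the crop window on the face
    x = cx - crop_w // 2
    y = cy - crop_h // 2
    # Clamp so the window doesn't spill past the image edges
    x = max(0, min(x, img_w - crop_w))
    y = max(0, min(y, img_h - crop_h))
    return x, y


# Face near the top-left corner: the crop gets pinned to the edge.
print(crop_around_face(1000, 600, (100, 50, 80, 80), 400, 400))   # → (0, 0)
# Face well inside the frame: the crop is centred on it.
print(crop_around_face(1000, 600, (700, 300, 100, 100), 400, 400))  # → (550, 150)
```

The interesting failure mode is upstream of this arithmetic: if the detector misses darker-skinned faces more often, a face-centred crop inherits that bias, which is the author's closing point.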