
Twitter is investigating why its image previews favor white faces over Black faces. Users discovered a problem with the algorithm Twitter uses to automatically crop image previews: it appears to show white faces more frequently than Black faces.
Trying a horrible experiment…
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia— Tony “Abolish (Pol)ICE” Arcieri ???? (@bascule) September 19, 2020
Several Twitter users demonstrated the issue over the weekend, posting images that contained both a Black person’s face and a white person’s face; Twitter’s preview crop showed the white face more often.
The informal testing began after a Twitter user tried to post about an issue he noticed with Zoom’s face detection, which was not showing the face of a Black colleague on calls.
A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn’t work. I was in a meeting with him today when I realized why it was happening. — Colin Madland (@colinmadland) September 19, 2020
When he posted about the problem on Twitter, he noticed that Twitter’s preview crop also favored his own white face over his Black colleague’s face.
Users found that the preview algorithm favored non-Black cartoon characters as well.
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf— Jordan Simonovski (@_jsimonovski) September 20, 2020
When Twitter first started using algorithms to automatically crop previews of images, machine learning researchers explained in a post how they started with facial recognition to crop images, but found it lacking, mainly because not all images contained faces.
Twitter clarified that face detection often missed faces and sometimes mistakenly detected faces where there were none. If no faces were found, the crop would center on the middle of the image, which could result in awkwardly cropped previews.
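The fallback behavior Twitter described can be sketched roughly as follows. This is an illustrative reconstruction, not Twitter’s actual code; the function name and the bounding-box format are assumptions.

```python
# Sketch of the cropping fallback Twitter described: center the crop on a
# detected face if one exists, otherwise fall back to the image center
# (the behavior that could leave previews awkwardly cropped).

def crop_window(img_w, img_h, crop_w, crop_h, faces):
    """Return the (x, y) top-left corner of a crop_w x crop_h window.

    `faces` is a list of (x, y, w, h) bounding boxes produced by some
    face detector; an empty list means no face was found.
    """
    if faces:
        # Center the crop on the largest detected face.
        fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = fx + fw // 2, fy + fh // 2
    else:
        # No face detected: fall back to the center of the image.
        cx, cy = img_w // 2, img_h // 2
    # Clamp so the crop window stays inside the image bounds.
    x = min(max(cx - crop_w // 2, 0), img_w - crop_w)
    y = min(max(cy - crop_h // 2, 0), img_h - crop_h)
    return x, y
```

For a 1000×1000 image with no detected faces, a 200×200 crop lands at (400, 400), the dead center; a face near a corner pulls the window toward that corner instead.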
Twitter’s chief design officer, Dantley Davis, tweeted that the company was investigating its algorithm and ran some unscientific experiments on images himself. Liz Kelly, of Twitter’s communications team, tweeted on Sunday that the company had tested for bias before shipping the model but found no evidence of racial or gender bias.
Here’s another example of what I’ve experimented with. It’s not a scientific test as it’s an isolated example, but it points to some variables that we need to look into. Both men now have the same suits and I cover their hands. We’re still investigating the NN. pic.twitter.com/06BhFgDkyA— Dantley (@dantley) September 20, 2020
“Clearly we have more analysis to do,” Kelly wrote in a tweet. “We will open source our work so others can review and iterate.”
Twitter’s chief technology officer, Parag Agrawal, tweeted that the model needs “continuous improvement,” adding that he is “eager to learn” from the experiments.
This is a very important question. To address it, we did analysis on our model when we shipped it, but it needs continuous improvement.
Love this public, open, and rigorous testing — and eager to learn from this. https://t.co/E8Y71qSLXa— Parag Agrawal (@paraga) September 20, 2020