Are algorithms racist? This often-asked question has been back in the news since this weekend. It all started with a tweet in which a user wondered which face the social network would pick between Mitch McConnell, the Republican Senate Majority Leader, and Barack Obama, 44th President of the United States. Two tall images containing both men were posted: one with Barack Obama at the bottom, another with Barack Obama at the top. The center of each image was left empty, so Twitter could not simply select the content in the middle (you are encouraged to open the tweet to see exactly how the experiment was set up).
The result is damning: each time, Mitch McConnell is picked by the algorithm and appears in the tweet's preview. Barack Obama only appears when the colors are inverted or when Mitch McConnell's glasses are added to him.
We tested for bias before shipping the model & didn’t find evidence of racial or gender bias in our testing. But it’s clear that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, & will open source it so others can review and replicate.
– Twitter Comms (@TwitterComms) September 20, 2020
Twitter denies that racism is the cause
Through its account dedicated to communication, Twitter says it ran these comparisons itself and found no evidence that skin color is behind its algorithm's behavior. The social network nonetheless promises further investigation into the subject and says it will be transparent by publishing the results of its studies. Other users ran their own tests and found, for example, that Twitter's algorithm preferred the older, lighter-skinned Michael Jackson to the young Michael Jackson of the Jackson Five. Between a man and a woman, the man is also often favored. Finally, between a white woman and a Black woman, it is once again not the Black woman who wins out …
How can such behavior be explained? Without drawing any firm conclusions (that is not our role), we can easily imagine that it comes down to bias in how the algorithms were trained. If the Twitter engine learned to recognize millions of white men but only thousands of people of color or women, it will naturally gravitate toward a white man when it sees one. The tech industry has often run into these problems, particularly with facial recognition. It is up to the industry to be more inclusive in its development work so that the entire human population is represented.
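To make the training-imbalance hypothesis concrete, here is a toy sketch (not Twitter's actual model, and purely hypothetical labels): a probabilistic classifier trained on a heavily skewed dataset falls back on the majority class whenever its features carry no real signal.

```python
# Toy illustration of dataset imbalance, not Twitter's actual saliency model.
# The group names and counts below are hypothetical.
from collections import Counter

# Hypothetical training set: far more examples of one group than the other.
training_labels = ["group_a"] * 1_000_000 + ["group_b"] * 1_000

prior = Counter(training_labels)
total = sum(prior.values())

def predict_with_uninformative_features():
    # With no useful signal from the input, a probabilistic model
    # defaults to the class prior learned from the training data.
    return max(prior, key=lambda label: prior[label] / total)

print(predict_with_uninformative_features())  # prints "group_a"
```

The point of the sketch is that no explicit rule about either group exists anywhere in the code; the skew in the data alone is enough to make the model systematically prefer the over-represented class.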
Users of a third-party Twitter client such as Tweetbot or Tweetdeck are spared this behavior. These apps do not use such an algorithm and simply display the center of the image. What if the solution to the problem lies there? Should we really ask a machine to choose between two human beings?
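The non-algorithmic behavior described above can be sketched in a few lines: a fixed center crop keeps the middle window of the image, with no saliency model involved. The function name and the toy pixel grid are illustrative, not taken from any client's code.

```python
# Minimal sketch of a fixed center crop, the behavior attributed above to
# third-party clients. No machine learning, just geometry.
def center_crop(pixels, crop_h, crop_w):
    """Return the centered crop_h x crop_w window of a 2-D pixel grid."""
    h, w = len(pixels), len(pixels[0])
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return [row[left:left + crop_w] for row in pixels[top:top + crop_h]]

# A 4x4 "image" of labeled pixels; the centered 2x2 crop keeps the middle only.
image = [[f"p{r}{c}" for c in range(4)] for r in range(4)]
print(center_crop(image, 2, 2))  # [['p11', 'p12'], ['p21', 'p22']]
```

Because the crop position depends only on the image dimensions, every image is treated identically, which is precisely why this approach cannot exhibit the preference the article describes.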