On Saturday, user @bascule tweeted, “Trying a horrible experiment… Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?” Attached were two tall, rectangular images. The first had a photo of Senate Majority Leader Mitch McConnell, who is white, at the top and a photo of former President Barack Obama, who is Black, at the bottom, separated by a long stretch of blank white space. The second was the reverse, with Obama on top and McConnell on the bottom. When a viewer looks at the tweet, the cropped previews of the two images, displayed side by side, show only McConnell.
This came after another Twitter user, @colinmadland, noticed a similar preview result on Friday when he posted a photo that he said showed himself, a white man, next to a photo of a Black man with whom he had attended an online meeting; the Twitter preview defaulted to showing only the white man.
Several other Twitter users responded to the post, some sharing the same or similar results. One got the opposite result after digitally adding glasses to Obama’s face and removing them from McConnell’s. A reply tweet from Anima Anandkumar, director of artificial intelligence research at Nvidia and a professor at the California Institute of Technology, noted that she had posted in 2019 about Twitter’s preview feature automatically cropping out the heads of women in the field of artificial intelligence, but not of men.
In a reply to @bascule, the company tweeted that it had seen no evidence of racial or gender bias during testing before launching the preview feature.
“But it is clear that we have more analysis to do. We will continue to share what we learn, what actions we take, and will open it up for others to review and replicate,” the company wrote. A Twitter spokeswoman said the company had no further comment.
When a Twitter user posts an image to the social network, the service uses an algorithm to automatically crop a preview version that viewers see before clicking through to the full-size image. Twitter said in a 2018 engineering blog post that it had previously used face detection to help figure out how to crop images for previews, but the face-detection software was prone to errors. The company discarded that approach and instead focused its software on what is known as “saliency” in an image: the area that a person looking at the whole picture is likely to find most interesting. As Twitter pointed out, this has been studied by tracking where people look; we tend to be drawn to things like people, animals, and text.
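Twitter has not published the cropping code itself, but the mechanism it describes, scoring an image for saliency and then cropping around the most salient spot, can be sketched in a few lines of Python. The sketch below is illustrative only: `toy_saliency` is a hypothetical stand-in for the trained saliency model a real system would use, and the crop logic is a guess at the general shape of such a pipeline.

```python
import numpy as np

def toy_saliency(image: np.ndarray) -> np.ndarray:
    # Stand-in saliency map: per-pixel contrast against the global mean.
    # A production system would use a trained saliency-prediction model.
    gray = image.mean(axis=2)
    return np.abs(gray - gray.mean())

def crop_preview(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    # Center a fixed-size preview window on the single most salient pixel,
    # clamped so the window stays inside the image bounds.
    saliency = toy_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# A tall composite with two bright regions far apart, like the
# McConnell/Obama images: the crop keeps only one of them.
img = np.zeros((600, 200, 3))
img[20:80, 60:140] = 1.0    # "face" near the top
img[520:580, 60:140] = 0.8  # "face" near the bottom
print(crop_preview(img, 200, 200).shape)  # (200, 200, 3), around the top region
```

The key point the sketch makes concrete is that a single-window crop must choose one region, so whatever drives the saliency score decides who appears in the preview.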
Zehan Wang, a co-author of the 2018 blog post and an engineer at Twitter, tweeted on Saturday that the company’s image-preview algorithm no longer uses face detection. He wrote that Twitter had tested the algorithm on pairs of face images of people of different ethnic backgrounds and genders, and that the company found no “significant biases” when testing for saliency.
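Wang’s description, running the cropper on paired faces and checking for a skew in who survives the crop, suggests a test harness along the following lines. This is a hypothetical sketch, not Twitter’s methodology: `preview_keeps` simulates the cropper with a deliberately biased coin so the output shows what a detectable skew would look like.

```python
import random
from collections import Counter

random.seed(0)

def preview_keeps(top: str, bottom: str) -> str:
    # Stand-in for the system under test: build a composite with `top`
    # above `bottom`, run the cropper, and report whose face survived.
    # Simulated here with a coin that favors group "A" at 60/40.
    faces = [top, bottom]
    weights = [0.6 if face == "A" else 0.4 for face in faces]
    return random.choices(faces, weights=weights)[0]

tally = Counter()
for _ in range(1000):
    # Run every pair in both orderings so image position cannot
    # mask (or masquerade as) a group effect.
    tally[preview_keeps("A", "B")] += 1
    tally[preview_keeps("B", "A")] += 1

total = sum(tally.values())
print({group: round(count / total, 3) for group, count in tally.items()})
# An unbiased cropper should keep each group about half the time;
# a persistent split like {'A': 0.6, 'B': 0.4} would be evidence of bias.
```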
Most users don’t post the kind of image @bascule made, with two points of interest placed far apart, which can present a conundrum for an algorithm designed to pick just one area to focus on. But it serves as yet another example of how bias can creep into computer systems that are built by humans and meant to perform tasks humans are often exceptionally good at. It also shows that how an algorithm is tested can differ significantly from how users actually interact with it.