‘Horrible Experiment’ Appears to Show Twitter Image Cropping Tool Is Racially Biased

Twitter has launched an investigation after users claimed that its image cropping feature favors the faces of white people.

A tool in the social network’s mobile app automatically crops images that are too large to fit on the screen, selecting which parts of an image should be cut.

But an experiment by a programmer appeared to show racial bias.

Image: Twitter has pledged to investigate

To see what the Twitter algorithm would choose, Tony Arcieri posted a tall image with headshots of Republican Senate Leader Mitch McConnell at the top and former US President Barack Obama at the bottom, separated by white space.

In a second image, Obama’s headshot was placed at the top and McConnell’s at the bottom.

Both times, the former president was cropped out entirely.

Following the “horrible experiment”, which he ran after an image he posted cropped out a Black colleague, Arcieri wrote: “Twitter is just one example of the racism that manifests itself in machine learning algorithms.”

At the time of writing, his experiment had been retweeted 78,000 times.

Twitter has promised to investigate the issue, but said in a statement: “Our team ran bias tests before shipping the model and found no evidence of racial or gender bias in our tests.

“From these examples it is clear that we have more analysis to do. We will continue to share what we learn, what actions we take, and open our analysis for others to review and replicate.”

A Twitter representative also pointed to research by a Carnegie Mellon University scientist who analyzed 92 images. In that experiment, the algorithm favored Black faces 52 times out of 92.

In 2018, the company said the tool was based on a “neural network” that uses artificial intelligence to predict which part of a photo a user is most likely to find interesting.
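That description corresponds to what is generally called saliency prediction: a network scores every region of a photo by how likely a viewer is to look at it, and the crop is chosen around the highest-scoring area. Twitter has not released its model, so the following is only a minimal sketch of the cropping step, assuming a saliency map has already been produced by some such network:

```python
import numpy as np

def crop_to_salient_region(image: np.ndarray, saliency: np.ndarray,
                           crop_h: int, crop_w: int) -> np.ndarray:
    """Crop `image` to a crop_h x crop_w window centered on the pixel
    the saliency map scores highest.

    image    -- H x W x 3 array of pixels
    saliency -- H x W array of per-pixel "interestingness" scores
                (hypothetical here; Twitter's actual model is unpublished)
    """
    h, w = saliency.shape
    # Locate the most salient pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on it, clamped to the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a tall image whose (pretend) saliency peaks near the top.
image = np.random.rand(800, 400, 3)
saliency = np.zeros((800, 400))
saliency[100, 200] = 1.0  # pretend the model fires on a face here
crop = crop_to_salient_region(image, saliency, 300, 400)
print(crop.shape)  # (300, 400, 3)
```

Framed this way, the bias question reduces to the saliency map itself: if the network systematically assigns higher scores to white faces than to Black faces, even a mechanically neutral crop like the one above will systematically cut the latter out.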

Meredith Whittaker, co-founder of the AI Now Institute, told the Thomson Reuters Foundation: “This is another in a long and weary litany of examples showing automated systems encoding racism, misogyny and histories of discrimination.”
