The web can be a wonderful place to bring nerdy technical discussions to a wider audience. This happened again in recent days, when it was debated whether Twitter's image-cropping algorithm discriminates against people with dark skin and, if so, which technical details might be responsible.
The debate was started by PhD student Colin Madland, who actually wanted to draw attention to a problem with Zoom: the video conferencing service had repeatedly hidden the head of a Black colleague whenever the man used one of Zoom's default backgrounds.
When Madland wanted to tell his Twitter followers about it and uploaded a screenshot of a Zoom call with him and his colleague, he noticed that there might be a problem on Twitter, too: Twitter automatically cropped the thumbnail of his screenshot in such a way that only his own face was visible, not that of his Black colleague.
With automatic image cropping, Twitter aims to ensure that the most relevant or meaningful part of an image appears in the preview. In the case of Madland's screenshot, the crop was not only unhelpful, it also discriminated against his Black colleague.
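Systems of this kind typically work by scoring every region of an image for visual "saliency" and then cutting the preview window around the highest-scoring spot. The details of Twitter's model are not public, so the following Python snippet is only a minimal sketch of the general approach; `estimate_saliency` is a hypothetical stand-in for whatever trained model a real service would use.

```python
import numpy as np

def estimate_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a saliency model.

    A real system would use a trained neural network; here we simply treat
    local deviation from the mean brightness as 'saliency'.
    """
    gray = image.mean(axis=2)              # H x W brightness
    return np.abs(gray - gray.mean())      # higher value = more "interesting"

def crop_preview(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Cut a crop_h x crop_w preview window centred on the most salient point."""
    saliency = estimate_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Clamp the window so it stays inside the image.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Example: a tall 800x400 image reduced to a 300x400 preview.
img = np.random.randint(0, 255, size=(800, 400, 3), dtype=np.uint8)
print(crop_preview(img, crop_h=300, crop_w=400).shape)  # (300, 400, 3)
```

Whatever biases such a saliency model has picked up from its training data are passed straight through to the crop, which is how a skewed model can end up systematically favouring one face over another.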
Programmer Tony Arcieri took Madland's tweet as an opportunity to test Twitter's algorithm himself: he uploaded image files to Twitter showing the American politicians Barack Obama and Mitch McConnell. Here, too, the result was clear: the algorithm that generates the preview image showed only McConnell and cropped out Obama.
To rule out a preference for certain colors, Arcieri matched the color of the ties the two men wore in the photos and repeated the experiment. Once again, the algorithm chose McConnell for the preview image. A preference for red ties, then, cannot be the explanation.
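Arcieri's setup can be phrased as a small, repeatable experiment: stack two portraits into one tall image, once in each vertical order, run the cropping logic on both versions, and record which face lands in the preview. The sketch below reuses the toy `estimate_saliency` placeholder from the snippet above and random arrays in place of real photos; it illustrates the test protocol, not Twitter's internal pipeline.

```python
import numpy as np

def estimate_saliency(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a saliency model (same assumption as in the sketch above)."""
    gray = image.mean(axis=2)
    return np.abs(gray - gray.mean())

def stack_portraits(top: np.ndarray, bottom: np.ndarray, gap: int = 400) -> np.ndarray:
    """Place one portrait above the other with a white gap, mimicking the tall
    test images used in the Twitter experiments."""
    spacer = np.full((gap, top.shape[1], 3), 255, dtype=np.uint8)
    return np.vstack([top, spacer, bottom])

def favoured_half(image: np.ndarray) -> str:
    """Report whether the most salient point lies in the upper or lower half."""
    saliency = estimate_saliency(image)
    y, _ = np.unravel_index(np.argmax(saliency), saliency.shape)
    return "top" if y < image.shape[0] / 2 else "bottom"

# In practice face_a and face_b would be real portrait photos of the same width.
face_a = np.random.randint(0, 255, size=(300, 400, 3), dtype=np.uint8)
face_b = np.random.randint(0, 255, size=(300, 400, 3), dtype=np.uint8)

for order, img in [("A over B", stack_portraits(face_a, face_b)),
                   ("B over A", stack_portraits(face_b, face_a))]:
    print(order, "-> crop favours the", favoured_half(img), "portrait")
```

Running the same pair in both orders is the crucial step: it separates a genuine preference for one face from a simple preference for the top or bottom of the image.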
Twitter bosses announce new steps
A little later, several high-ranking Twitter employees spoke up and thanked the users for their experiments. In addition to a debate about whether Twitter's artificial intelligence prefers certain exposure settings or open versus closed arms in photos, Twitter's chief design officer Dantley Davis wrote: "I'm in a position where I can solve the problem, and I will."
However, Twitter has not yet been able to explain why the algorithm removed dark-skinned people from the preview images. In the tests carried out before the feature was introduced, no systematic racism had been found, a spokeswoman said.
Even though the experiments by Arcieri and Madland were shared and discussed thousands of times, it should be clear that their examples are only anecdotal and do not offer statistically reliable evidence of a racist algorithm. At the same time, they fit with numerous scientific findings on how artificial intelligence and facial recognition disadvantage people with dark skin tones.
In the case of Twitter's image-cropping algorithm, programmer Vinay Prabhu, in a comparable test with 92 faces, came to the result that Black faces actually appeared more frequently in the preview images.
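The difference between anecdote and evidence can be made concrete with a little arithmetic: if the algorithm really chose either face with equal probability, a one-sided outcome in two or three uploads would be unremarkable, whereas a strong skew across 92 images would be extremely unlikely by chance. The sketch below computes a two-sided binomial p-value with the Python standard library; the 70-of-92 figure is purely hypothetical and not Prabhu's actual result.

```python
from math import comb

def binomial_p_value(k: int, n: int) -> float:
    """Two-sided p-value for k 'hits' in n trials under a fair 50:50 null."""
    probs = [comb(n, i) * 0.5 ** n for i in range(n + 1)]
    observed = probs[k]
    # Sum the probability of every outcome at least as extreme as the observed one.
    return sum(p for p in probs if p <= observed + 1e-12)

# Two uploads that both favour the same face: perfectly compatible with chance.
print(binomial_p_value(2, 2))    # 0.5
# A hypothetical 70-of-92 skew: practically impossible under a fair algorithm.
print(binomial_p_value(70, 92))  # on the order of 1e-6
```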
In any case, it makes sense that Twitter has announced it will investigate the problem more closely and improve the algorithm. Even though scientists have been researching discrimination by algorithms for a long time, the topic is not just a matter for the academic niche; it can also lead to serious personal and social injustice.
Strange digital world: explaining TikTok is like telling jokes
As a tech reporter, the increasingly absurd fight over the video app TikTok occasionally has private consequences for me, too. Since the app became the subject of a global trade war, I have repeatedly found myself explaining to less internet-savvy family members or acquaintances what this TikTok actually is and how it works.
The conversation usually follows the same pattern: I say things like, at some point it was mainly about dance videos in which the words "Baby Shark" came up very often. Of course, this does not really add to my interlocutor's understanding, and even if they start humming the melody of "Baby Shark", it does not change much.
Then I explain, for example, that particularly creative videos and campaigns keep emerging on the app, that it has become an important platform for some communities, for example in the LGBTQ sphere, or that its aesthetic has long had a cultural influence that extends well beyond the app itself. None of this necessarily helps older acquaintances understand what actually happens on TikTok. Usually I just pull out my phone, open the app, and stop talking.
External links: three tips from other media
- "WhatsApp & Co.: Researchers warn against mass reading of contacts" (3-minute read): Using so-called tracking attacks, researchers at TU Darmstadt managed to spy out information about users of messaging apps such as WhatsApp, Signal, or Telegram. Personal messages could not be read, but the study shows that certain account information can be vulnerable even with encrypted messengers.
- "Country of Liars" (53-minute podcast, English): The creators of the "Reply All" podcast know how to delve into remote corners of the internet while telling the human stories behind them. In the current episode, they get to the bottom of the QAnon phenomenon and try to figure out who might be behind the now-influential conspiracy belief.
- "Paper Maps, Two-Way Radios: How Firefighting Technology Is Stuck In The Past" (English, 5-minute read): The fire departments fighting the fires raging in the western United States rely in part on outdated technology, even though some of the most advanced innovations of recent years were invented in California's Silicon Valley. Some startups are now developing technology to better equip the fire services. The burned forests, however, will not come back because of it.
I wish you a good week!
Max Hoppenstedt