What Google’s firing of researcher Timnit Gebru means for AI ethics




Google triggered an uproar earlier this month when it fired Timnit Gebru, the co-leader of a team of company researchers studying the ethical implications of artificial intelligence. Google claims it accepted her “resignation,” but Gebru, who is Black, says she was fired for drawing attention to the lack of diversity in Google’s workforce. She had also clashed with her supervisors over their demand that she retract a paper she co-authored on the ethical issues associated with certain types of artificial intelligence models that are critical to Google’s business.

On this week’s Trend Lines podcast, WPR’s Elliot Waldman joined Karen Hao, senior AI reporter for MIT Technology Review, to discuss Gebru’s ouster and its implications for the increasingly important field of AI ethics.

Listen to the full interview with Karen Hao on the Trend Lines podcast:

If you like what you hear, subscribe to Trend Lines:

The following is a partial transcript of the interview. It has been lightly edited for clarity.

World Politics Review: First, could you tell us a little about Gebru, the stature she has in the field of artificial intelligence given the groundbreaking research she has done, and how she ended up at Google to begin with?

Karen Hao: Timnit Gebru, you could say, is one of the cornerstones of the field of AI ethics. She got her Ph.D. at Stanford, where she was advised by Fei-Fei Li, one of the pioneers of the entire field of AI. When Timnit completed her Ph.D., she joined Microsoft for a postdoc, before ending up at Google after she was approached on the strength of the impressive work she had done. Google was starting its ethical AI team and thought she would be a great person to co-lead it. One of the studies she is best known for is one she co-authored with another Black researcher, Joy Buolamwini, on the algorithmic discrimination that appears in commercial facial recognition systems.

The paper was published in 2018, and at the time, the revelations were quite shocking, because they were auditing commercial facial recognition systems that were already being sold by tech giants. The findings showed that these systems, which were being sold on the premise that they were highly accurate, were actually extremely inaccurate, specifically on female and darker-skinned faces. In the two years since the paper was published, there has been a series of events that ultimately led these tech giants to withdraw or suspend the sale of their facial recognition products to police. The seed for those actions was actually planted by the paper that Timnit co-authored. So she has a great presence in the field of AI ethics, and she has done some great, innovative work. She also co-founded a nonprofit called Black in AI that advocates for diversity in tech and in AI specifically. She is a force of nature and a household name in the space.

We should think about how to develop new artificial intelligence systems that do not rely on this brute-force method of extracting billions of sentences from the internet.

WPR: What exactly are the ethical issues that Gebru and her co-authors identified in the paper that led to her dismissal?

Hao: The paper was about the risks of large language models, which are essentially AI algorithms that are trained on an enormous amount of text. So you can imagine that they are trained on all the articles that have been published on the internet, all the subreddits and Reddit threads, Twitter and Instagram captions, everything. They are trying to learn how we construct English sentences and how they could generate English sentences themselves. One of the reasons Google is very interested in this technology is that it helps boost its search engine. For Google to give you relevant results when you search for a query, it has to be able to capture or interpret the context of what you are saying, so that if you type three words at random, it can still gather the intent behind what you are looking for.
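To make that idea concrete, here is a minimal sketch of the kind of contextual inference Hao is describing. It is not Google’s actual search stack; it simply uses the open-source Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, both chosen as illustrative stand-ins, to show a pretrained language model guessing a missing word from surrounding context alone.

```python
# A minimal sketch (assumed stand-in, not Google's system): a pretrained
# masked language model ranks plausible words for a blank using only the
# surrounding context, the same kind of inference that helps a search
# engine interpret the intent behind a short, ambiguous query.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model has never seen this exact sentence; it relies on patterns
# learned from large amounts of web text to fill in the blank.
for prediction in fill_mask("I need a new phone [MASK] for my cracked screen."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

The point of the sketch is only that statistical patterns learned from huge web corpora let the model infer intent from context, which is exactly what makes the approach commercially attractive and, as the paper argues, what makes its training data and energy costs hard to ignore.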

What Timnit and her co-authors point out in this paper is that this relatively recent area of research is beneficial, but it also has some rather significant drawbacks that need to be discussed further. One of them is that these models require a huge amount of electricity to run, because they operate in really big data centers. And given the fact that we are in a global climate crisis, the field should consider that, in doing this research, it could be exacerbating climate change, with downstream effects that disproportionately impact marginalized communities and developing countries. Another risk they point to is the fact that these models are so large that they are very difficult to scrutinize, and they also capture large swaths of the internet that are highly toxic.

So they end up normalizing a lot of sexist, racist, or abusive language that we don’t want to perpetuate in the future. But because of the lack of scrutiny of these models, we can’t fully analyze the kinds of things they’re learning and then discard them. Ultimately, the conclusion of the paper is that these systems have great benefits, but also great risks. And as a field, we should spend more time thinking about how we can develop new AI language systems that don’t rely so much on this brute-force method of simply training them on billions of sentences drawn from the internet.

WPR: And how did Gebru’s supervisors at Google react to that?

Hao: The interesting thing is that Timnit has said, and this has been corroborated by her former teammates, that the paper was approved for submission to a conference. This is a very standard process for her team and within the broader Google AI research organization: the goal of doing this research is to contribute to academic discourse, and the best way to do that is to submit it to an academic conference. They had prepared this paper with some external collaborators and submitted it to one of the major AI ethics conferences for next year. It had been approved by her manager and others, but then, at the last minute, she received a notification from superiors above her manager saying that she had to retract the paper.

Very little was revealed to her about why she needed to retract the paper. She then proceeded to ask a lot of questions about who was telling her to retract it, why they were asking her to retract it, and whether modifications could be made to make it more palatable for publication. She continued to be stonewalled and received no further clarification, so she ended up sending an email just before she left on Thanksgiving vacation saying she wasn’t going to retract the paper unless certain conditions were met first.

Silicon Valley has a conception of how the world works based on the disproportionate representation of a particular subset of the world. That is, generally upper-class heterosexual white men.

She asked who was giving the feedback and what the feedback was. She also requested meetings with more senior executives to explain what had happened. The way her research had been treated was extremely disrespectful and was not the way researchers were traditionally treated at Google, and she wanted an explanation of why they had done that. And if they didn’t meet those conditions, then she would have a frank conversation with them about a last day at Google, so that she could create a transition plan, leave the company smoothly, and publish the paper outside the context of Google. She then went on vacation, and in the meantime, one of her direct reports texted her to say that they had received an email saying that Google had accepted her resignation.

WPR: In terms of the issues that Gebru and her co-authors raise in their paper, what does it mean for the field of AI ethics to have what appears to be a massive level of moral hazard, where the communities most at risk from the harms that Gebru and her co-authors identified (the environmental ramifications and so forth) are marginalized and often voiceless in the tech space, while the engineers who build those AI models are largely insulated from risk?

Hao: I think this is at the center of what has been an ongoing discussion within this community for the past few years, which is that Silicon Valley has a conception of how the world works based on the disproportionate representation of a particular subset of the world. That is, generally upper-class heterosexual white men. The values that they hold from their cross section of lived experience have now somehow become the values that everyone should live by. But it doesn’t always work that way.

They do this cost-benefit analysis and conclude that it’s worth creating these very large language models, and worth spending all that money and electricity, to reap the benefits of that kind of research. But that analysis is based on their values and their lived experience, and it might not be the same cost-benefit analysis that someone would do in a developing country, where they would rather not have to deal with the repercussions of climate change later on. That was one of the reasons Timnit was so adamant about making sure there is more diversity at the decision-making table. If you have more people with different lived experiences who can analyze the impact of these technologies through their own lenses and bring their voices into the conversation, then perhaps we would have more technologies that do not skew their benefits so heavily toward one group at the expense of others.

Editor’s Note: The top photo is available under the CC BY 2.0 license.
