Google widely criticized after parting ways with a leading voice in AI ethics


Timnit Gebru is known for her research on bias and inequality in AI, and in particular for a 2018 paper she co-authored with Joy Buolamwini showing how poorly commercial facial-recognition software performed when attempting to classify women and people of color. Today there is widespread awareness of such issues in AI, especially when the technology is tasked with identifying anything about human beings.
At Google, Gebru was co-lead of the company’s Ethical AI team and one of its very few Black employees (3.7% of Google’s employees are Black, according to the company’s 2020 annual diversity report); she was also one of an even smaller number of Black women within its AI research division. The research scientist tweeted Wednesday night that she had been fired “immediately” over an email she recently sent to an internal Google mailing list for women and allies.
In later tweets, Gebru clarified that no one at Google had explicitly told her she was fired. Rather, she said, Google would not meet a number of her conditions for returning and accepted her resignation immediately because it felt her email reflected “behavior that is inconsistent with the expectations of a Google manager.”
In an email first published by the newsletter Platformer on Thursday, Gebru wrote that she felt “constantly dehumanized” at Google and expressed frustration over the company’s ongoing lack of diversity.

“Because there is zero accountability. There is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people, you start making the other leaders upset when they don’t want to give you good ratings during calibration. There is no way more documents or more conversations will achieve anything,” she wrote.

Gebru also expressed frustration with the internal review process for a research paper she had co-written with colleagues at Google and researchers outside the company, which had not yet been published.

Research paper in question

Gebru, who joined Google in late 2018, told CNN Business that the research paper in question concerns the risks of large language models – a growing trend in AI, with the release of increasingly capable systems that can convincingly generate human-sounding text such as recipes, poetry, and even news articles. It is also an area of AI that Google has signaled is key to the future of its search business.
Gebru said the paper had been submitted to a conference on fairness, accountability, and transparency taking place in March, and that there was nothing unusual about how the paper went through Google’s internal review. She said she wrote the email Tuesday evening after a long back-and-forth with Google AI leadership in which she was repeatedly asked to withdraw the paper from consideration for presentation at the conference or to remove her name from it.

Gebru told CNN Business that she was informed Wednesday that she no longer worked at the company, and said she could not have imagined something like this happening.

An email sent to Google Research staff

A Google spokesman said the company had no comment on the matter.

In an email sent to Google Research employees on Thursday, which he posted publicly on Friday, Jeff Dean, Google’s head of AI, gave employees his perspective: that Gebru had co-authored a paper but did not give the company the required two weeks to review it before its deadline. The paper was reviewed internally, he wrote, but it “didn’t meet our bar for publication.”

Dean added: “Unfortunately, this particular paper was only shared with a day’s notice before its deadline – we require two weeks for this sort of review – and then instead of awaiting reviewer feedback, it was approved for submission and submitted.”

He said Gebru had responded with demands that would have to be met if she were to stay at Google. “If we didn’t meet these demands, she would leave Google and work on an end date,” Dean wrote.

Gebru told CNN Business that her conditions included meetings with Dean and other AI executives at Google to discuss the lack of transparency around how the paper was ordered to be withdrawn, as well as the treatment of researchers.

“We accept and respect her decision to resign from Google,” Dean wrote. He also explained some of the company’s research and review process and said he would be speaking with Google’s research teams, including members of the Ethical AI team, “so they know that we strongly support these important streams of research.”

Quick show of support

Following Gebru’s initial tweet on Wednesday, coworkers at Google and others quickly rallied behind her online, including Margaret Mitchell, who was Gebru’s co-lead on the Ethical AI team.

“Today dawns a new horrible life-changing loss in a year of horrible life-changing losses,” Mitchell tweeted on Thursday. “I can’t well articulate the pain of losing @timnitgebru as my co-lead. I’ve been able to excel because of her, like so many others. I’m mostly in shock.”
“I have your back as you have always had mine,” tweeted Buolamwini, who, in addition to co-authoring the 2018 paper with Gebru, is the founder of the Algorithmic Justice League. “You are brilliant and respected. You listen to those that others readily ignore. You ask hard questions not to advance yourself but to uplift the communities we owe our foundations.”
Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense and Educational Fund, tweeted, “I’ve learned so much about AI bias from her. What a tragedy.”
By midday Friday, a Medium post decrying her departure had gathered signatures from 1,300 Google employees and more than 1,600 supporters in the academic and AI fields, demanding transparency around Google’s decision on the research paper. Those sharing their support included numerous women who have fought inequality in the tech industry, such as Ellen Pao, CEO of Project Include and former CEO of Reddit; Ifeoma Ozoma, a former Google employee who founded Earthseed; and Meredith Whittaker, faculty director at the AI Now Institute and a core organizer of the 2018 Google Walkout, which protested sexual harassment and misconduct at the company. Others included Buolamwini, as well as Danielle Citron, a law professor at Boston University who specializes in the study of online harassment and a 2019 MacArthur Fellow.

Citron told CNN Business that she sees Gebru as a “leading light” when it comes to exposing, clarifying, and studying racism and the embedded inequities that have become entrenched in algorithmic systems. She said Gebru has shown how important it is to rethink how data is collected and to ask whether we should even be using these systems at all.

“WTF, Google?” she said. “I’m sorry, but you’re so lucky she even came to work for you.”
