Google fires artificial intelligence researcher, one of the few black women in the field




Prominent artificial intelligence scholar Timnit Gebru helped improve Google's public image as a company that elevates black computer scientists and questions the harmful uses of artificial intelligence technology.

Kimberly White / Getty Images


But internally, Gebru, a leader in the field of AI ethics, had no qualms about voicing doubts about those commitments, until she was ousted from the company this week in a dispute over a research paper examining the social dangers of an emerging branch of AI.

Gebru announced on Twitter that she had been fired; Google told employees that she resigned. More than 1,200 Google employees signed an open letter calling the incident “unprecedented research censorship” and accusing the company of racism and defensiveness.

The furore over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far from its original motto of “Don’t be evil” that the company now routinely pushes out employees who dare to challenge management. The departure of Gebru, who is black, also raised further questions about diversity and inclusion at a company where black women make up only 1.6 per cent of the workforce.

READ MORE:
* Apple steps up its efforts to replace the Google search engine
* Google’s antitrust case won’t change anything
* US government launches landmark antitrust case against Google
* Data Scientists – Weapon of Choice in the AI Arms Race

And it has raised concerns beyond Google about whether high-profile efforts in ethical AI, ranging from a White House executive order this week to the ethics review teams established across the tech industry, are of little use when their conclusions could threaten a company’s profits or interests.

Google headquarters in Mountain View, California.

Marcio José Sánchez / AP


Gebru has been a star in the world of AI ethics, having spent her early tech career working on Apple products and earned her PhD studying computer vision at Stanford’s Artificial Intelligence Laboratory.

She is a co-founder of the Black in AI group, which promotes Black employment and leadership in the field. She is known for a landmark 2018 study that found racial and gender biases in facial recognition software.

Gebru had recently been working on a paper that examined the risks of developing computer systems that analyse huge databases of human language and use them to generate their own human-like text. The paper, a copy of which was shown to The Associated Press, mentions the new technology Google uses in its search business, as well as systems developed by others.

In addition to pointing out the potential dangers of bias, the paper also cited the environmental cost of consuming so much energy to run the models, a major issue at a company that boasts of its commitment to being carbon neutral since 2007 while striving to be even more ecological.

Google managers raised concerns about omissions in the paper and the timing of its release, and wanted the names of Google employees removed from the study, but Gebru objected, according to an email exchange shared with the AP and first published by Platformer.

Jeff Dean, Google’s head of artificial intelligence research, reiterated Google’s position on the study in a statement on Friday (local time).

The paper made valid points, but “had some important gaps that prevented us from being comfortable with the Google affiliation,” Dean wrote.

“For example, it didn’t include important findings about how models can be made more efficient and really reduce overall environmental impact, and it didn’t take into account some recent work at Google and elsewhere on mitigating bias,” Dean added.

On Tuesday, Gebru voiced her frustrations about the process to an internal diversity and inclusion email group at Google, under the subject line: “Silencing marginalized voices in every way possible.” Gebru said on Twitter that this was the email that got her fired.

Dean, in an email to employees, said the company accepted “her decision to resign from Google” because she had told managers she would leave if her demands about the study were not met.

“Ousting Timnit for having the audacity to demand research integrity seriously undermines Google’s credibility in supporting rigorous research on AI ethics and algorithmic auditing,” said Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology and co-author of the 2018 facial recognition study with Gebru.

“She deserves more than Google knew how to give, and she is now a star free agent who will continue to transform the technology industry,” Buolamwini said in an email on Friday.

How Google will handle its AI ethics initiative and the internal dissent caused by Gebru’s departure is one of the problems facing the company heading into the new year.

Amid her departure, the National Labor Relations Board on Wednesday put another spotlight on Google’s workplace. In a complaint, the NLRB accused the company of spying on employees during a 2019 union-organising effort before firing two activist workers for engaging in activities permitted by US law. Google has denied the allegations in the case, which is scheduled for a hearing in April.

Google has also been cast as a profit-driven bully by the US Department of Justice in an antitrust lawsuit alleging that the company has illegally abused the power of its dominant search engine and other popular digital services to stifle competition. The company denies wrongdoing in that legal battle as well, which could drag on for years.

AP Technology writer Michael Liedtke in San Ramon, California, contributed to this report.


