Google has added a layer of scrutiny to research papers on sensitive topics, including race, gender, and political ideology. A senior manager also instructed researchers to "strike a positive tone" in a paper this summer. The news was first reported by Reuters.
"Advances in technology and the increasing complexity of our external environment are increasingly leading to circumstances where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," the policy states. Three employees told Reuters the rule began in June.
The company has also told employees on several occasions to refrain from casting its technology in a negative light, Reuters says.
In one case, according to Reuters, researchers studying recommendation AI, which is used to personalize content on platforms such as YouTube, were told to "be very careful to strike a positive tone." The authors later updated the paper to "remove all references to Google products."
Another paper, about using AI to understand foreign languages, "softened a reference to how the Google Translate product was making mistakes," Reuters wrote. The change came in response to a reviewer's request.
Google's standard review process is meant to ensure that researchers do not inadvertently reveal trade secrets, but the "sensitive topics" review goes beyond that. Employees who wish to assess Google's own services for bias are asked to consult with legal, PR, and policy teams first. Other sensitive topics reportedly include China, the oil industry, location data, religion, and Israel.
The search giant's publication approval process has been under scrutiny since the firing of AI ethicist Timnit Gebru in early December. Gebru says she was terminated over an email she sent to Google Brain Women and Allies, an internal listserv for Google AI research staff. In it, she described how Google managers were pushing her to retract a paper on the dangers of large-scale language models. Jeff Dean, Google's head of AI, said she had submitted it too close to the deadline. But Gebru's own team pushed back on that assertion, saying the policy was applied "unevenly and discriminatorily."
Gebru reportedly reached out to Google's PR and policy teams in September about the paper, according to The Washington Post. She knew the company might take issue with some aspects of the research, since it uses large language models in its search engine. The deadline for making changes to the paper was not until the end of January 2021, however, giving researchers ample time to respond to any concerns.
A week before Thanksgiving, however, Megan Kacholia, a VP of Google Research, asked Gebru to retract the paper. The following month, Gebru was fired.
Google did not immediately respond to a request for comment from The Verge.