Google Asked AI Researchers To Strike Positive Tone – Report


Scientific papers generated internally at the search engine giant are now subjected to “sensitive topics” review, it has been reported

The internal workings of the AI scientific community within Google have shed light on the level of control the firm tries to exert over research papers.

Reuters reported that Google allegedly launched a “sensitive topics” review this year over scientists’ papers, and in at least three cases requested authors refrain from casting its technology in a negative light.

This is apparently what internal communications and interviews with researchers involved in the work have shown.


Strict control

Google’s new review procedure reportedly asks researchers to consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorisations of race, gender or political affiliation, according to internal webpages explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” stated one of the pages for research staff, according to Reuters.

And for some projects, it seems Google officials have intervened at later stages.

A senior Google manager reviewing a study on content recommendation technology shortly before publication this summer told authors to “take great care to strike a positive tone,” according to internal correspondence shown to Reuters.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as the disclosure of trade secrets, eight current and former employees said.

Google reportedly declined to comment for this story.

Dr Gebru

But it should be remembered that Google is currently at the centre of a row concerning the dismissal of Dr Timnit Gebru, who is noted for her work on algorithmic bias, particularly with facial recognition technology.

Earlier this month Dr Gebru claimed that Google had effectively fired her over an email she sent to colleagues, but Google disputes this and insisted she didn’t follow procedure.

The issue began when Gebru alleged that Google was trying to get her to retract a research paper. She had co-authored the paper with four other Googlers as well as some external collaborators, and it needed to go through Google’s review process.

Google, however, alleged that this particular paper was shared with only a day’s notice before its deadline, and that it normally requires two weeks to conduct a review.

But instead of awaiting reviewer feedback, the authors approved the paper for submission and submitted it.

Gebru then emailed a Google unit (the Brain Women and Allies listserv), in which she voiced frustration that managers were trying to get her to retract the research paper.

After the email went out, Gebru reportedly told managers that certain conditions had to be met in order for her to stay at the company; otherwise, she would have to work on a transition plan.

But Google called her bluff: a senior official at Google Research, Megan Kacholia, said she could not meet Gebru’s conditions and accepted her resignation as a result.

Alphabet and Google CEO Sundar Pichai told staff that the firm would examine her departure, but staffers on Google’s Ethical AI research team reportedly demanded that the company sideline Kacholia and commit to greater academic freedom.

It is unclear at the time of writing whether Dr Gebru’s paper underwent a “sensitive topics” review.