“What on earth are they doing?”: AI ethics experts react to Google doubling embattled ethics team

Without making big changes, “There is no way for Google to retain its research credibility, especially in ethical AI,” one Google employee told us.

In the coming years, Google plans to double its AI ethics research staff to 200 members and boost the group’s operating budget. Marian Croak, the AI ethics division’s current lead, made the announcement during a speech at the Wall Street Journal’s Future of Everything Event this Tuesday, but didn’t offer a timeline or specifics on funding.

“We don’t have more details to share on funding,” a Google representative told us, pointing instead to a list of the company’s broader responsible AI efforts.

The news comes after five months of internal turmoil, sparked by Google firing both co-leaders of its AI ethics division: first Timnit Gebru, then Margaret Mitchell months later. The firings followed a dispute over their research paper on the dangers of large language models—the powerful AI technique that underlies some of Google’s key products and services, including Search.

Since then, staffers have resigned, candidates have turned down interviews, and a leading AI ethics conference has suspended Google’s sponsorship. In a joint statement Monday, three leading groups that are focused on diverse representation in AI—Queer in AI, Widening NLP, and Black in AI (which was founded by Gebru)—said they would end their “sponsorship relationship with Google.”

“Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business,” Croak said during her talk at the WSJ event. “It severely damages the brand if things aren’t done in an ethical way.”

The AI ethics group’s restructuring led to Croak’s appointment as leader, but her past work on patents for surveillance technology has sparked criticism. Croak also reports directly to Jeff Dean, head of Google AI, whose decisions kicked off the controversial string of events in the first place.

Pulse check

So how did the AI ethics community react to Google’s Tuesday announcement? The news got some “...” reactions on Twitter, with one team member saying it was news even to her.

“My reaction was: What on earth are they doing?” Meredith Whittaker, cofounder of the AI Now Institute and one of the organizers of the 2018 Google Walkouts, told us. “It sounds like a very desperate kind of cover-your-ass move from a company that is really unsure about how to continue to do business as usual in the face of a growing outcry around the harms and implications of their technologies.”

After all, she said, the company is “promising to double the number of staff of a group that [it] just very publicly decimated.”

In her view, an influx of hiring without an overhaul of Google’s approach to AI ethics could amount to ethics-washing: the pursuit of a narrow, corporate definition of ethics. “They’re looking for people who will be willing to accept a title of AI ethics researcher without asking questions that might perturb the company’s bottom line,” Whittaker said. “They want to look ethical without actually changing their business practices.”

When asked whether it was prioritizing appearances over changing business practices, a Google representative pointed to part of an earlier statement: “This work is incredibly important—we've recently pulled together 10 teams from across Google focused on responsible AI and related research under Marian Croak to strengthen and continue expanding this work.”

Tuesday’s news also led to increased concern internally among Google’s own AI ethics team.

“Google executives have demonstrated that they value control and loyalty over making substantive contributions to the ethics of AI, and that they will block or outright fire those they deem threatening,” Dylan Baker, a software engineer working on ethical AI at Google, told us. “Adding additional headcount under these conditions is nothing but posturing. As long as these motivations and behaviors remain unaddressed, there is no way for Google to retain its research credibility, especially in ethical AI.”

Looking ahead

Whittaker said she hopes to see more collective organizing by researchers alongside the communities most at risk for AI’s downstream harms—as well as infrastructure to support whistleblowers and a growing worker movement that sets ethical standards for which technology should, and should not, be built.

Since Gebru’s termination, Google Walkout for Real Change has published a letter calling for research integrity signed by thousands of Google employees and members of the tech community, as well as a call for stronger whistleblower protections. And in January, the Alphabet Workers Union officially formed, although organizing efforts predated the terminations.

There’s also a growing awareness in the research space that more questions, and boundaries, are necessary, Whittaker said: “We need to look under the hood and ask who’s funding this, what are the recommendations being made, [and] is this using the term ‘ethics’ as a buzzword without offering a meaningful check on the power of these specific companies.”
