Professionalism/Timnit Gebru and Ethical AI at Google

In late 2020, prominent Google AI researcher Timnit Gebru was abruptly fired after being pressured to retract a paper critical of language processing models used by the company. Her dismissal sparked outrage among employees and industry professionals, and ultimately led to a restructuring of leadership within the Google Ethical AI team. The firing of Gebru brought attention to issues of research censorship, oppressive work environments, and abuses of power in the technology industry.

Diversification efforts
Google AI is the artificial intelligence division of Google. The stated mission of the group is to build responsible systems that are accessible to all people. The AI principles that govern the work done by Google AI were adapted in 2020 to address systemic racial bias with the objective of "learning from a diverse set of voices internally and externally." Despite these efforts, Black employees make up less than 4% of the overall technical workforce at the company.

Ethical AI team
Google Brain is a research team within Google that focuses on machine intelligence, including deep learning and natural language processing. Within Google Brain, the Ethical AI team was created in 2017 by Margaret Mitchell, a senior research scientist working on aligning AI practices to human values and fairness. Gebru joined the team as a co-lead in September 2018, after persistent recruiting efforts by Mitchell and Jeff Dean, the head of the Google Brain research group.

Black in AI
Before joining the Ethical AI team, Gebru co-founded a non-profit organization to address the lack of diversity in artificial intelligence. The organization aspires to increase the presence of Black people in AI through academic programs, conferences, advocacy, and community. Gebru has heavily advocated for increased diversity among the researchers and developers working on AI systems as an essential component of building unbiased machine learning systems.

Algorithmic bias
In 2018, Gebru co-authored a widely publicized research paper on racial and gender bias in automated facial analysis systems. The study evaluated three commercially available gender classification products against subjects grouped by gender and skin type, and found that all three products had the lowest error rates for lighter-skinned male subjects and the highest error rates for darker-skinned female subjects. While there had been widespread speculation that AI-based systems did not perform uniformly across different populations, this research was one of the first studies to provide empirical evidence of such bias. Media coverage of the study was extensive, and Gebru became a prominent voice in the fight for fair algorithms.

Large language models
Around two years after joining the Ethical AI team at Google, Gebru was co-authoring a paper on the negative consequences of large language models (LMs). The paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", did not pass Google's internal review process. It identified the major issues with large LMs as environmental and financial costs, problematic training data, misdirected research effort, and the amplification of bias. The researchers made multiple critical references to BERT, a machine learning technique for language modeling that Google uses to improve search results and language processing. The paper concludes by imploring researchers to carefully weigh the risks of large LMs and to re-evaluate the perceived necessity of technologies that mimic human behavior.

Paper retraction and Gebru's response
On November 19, 2020, Timnit Gebru was abruptly asked to attend an afternoon meeting without being told what it would be about. At the meeting, then-Vice President of Google Research Megan Kacholia informed Gebru and her co-authors that their paper had not been approved for publication, and ordered them to retract it.

Gebru was angered by the decision, and emailed a listserv called Google Brain Women and Allies to voice her frustration with the experience and the environment at Google in general. In the email, Gebru states that she had to request feedback on the paper, and was only granted it in the form of a confidential document with anonymous authors. She says that when she tried to engage in a discussion about the feedback, she was ignored.

Gebru also implies that the retraction was not handled in accordance with typical procedures, and that she was treated poorly because of her race and gender. She was infuriated that she was not involved in the discussion about whether to retract the paper, saying that from Google’s perspective, “You are not worth having any conversations about this, since you are not someone whose humanity...is acknowledged or valued in this company.”

Gebru also discusses other instances of bias at Google, where Black women such as herself make up only 1.6% of the workforce. Gebru had previously stopped posting to Google Brain Women and Allies after “all the micro and macro aggressions and harassments [she] received after posting [her] stories.” She says that at Google “there is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people.” Her writing reflects previous instances of retaliation against activists at Google, such as after the Google Walkout of 2018.

Gebru's firing
After emailing the listserv, Gebru expressed her disappointment to Jeff Dean and Kacholia, and sent them a list of requirements for her to continue working at Google. One requirement was to reveal the anonymous authors of the feedback on her paper. Kacholia responded on December 2, 2020, saying that the demands could not be met, and that she accepted Gebru’s resignation. From Gebru’s perspective, she had been fired, and she believed it was retaliation for the email she sent to Google Brain Women and Allies rather than because of her demands. She tweeted that Kacholia had asked her to not return to work after her Thanksgiving vacation “because certain aspects of the email [she] sent... in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.”

On December 3, 2020, Jeff Dean sent an email to the Google Research Division to provide some clarity about the circumstances of the paper retraction and Gebru’s departure. Dean wrote that the paper “was only shared with a day’s notice before its deadline...and then instead of awaiting reviewer feedback, it was approved for submission and submitted,” which went against Google research procedures. Dean also mentioned that the paper neglected to incorporate recent research on language processing models that addressed some of the issues identified in the paper. However, another researcher at Google Brain later tweeted that his submissions "were always checked for disclosure of sensitive material, never for the quality of the literature review," implying that the rules Dean cited to justify the retraction are only selectively applied at Google.

After justifying the paper retraction, Dean states in his email that Gebru was not fired, but rather resigned of her own volition. He also expresses disappointment in the email Gebru sent to the listserv, imploring employees to keep working on internal diversity initiatives.

Initial backlash
Many Google employees were outraged by Gebru’s firing. On December 3, 2020, Google Walkout for Real Change created a petition asking Google to explain its response to Gebru’s paper, to increase transparency moving forward, and to renew its commitment to ethical artificial intelligence. As of April 25, 2021, the petition had 4,302 signatures, 2,695 of which were from Google employees. In December, the Google Ethical AI team wrote a letter to Google management. The letter, titled “The Future of Ethical AI at Google Research,” asks for engineering Vice President Megan Kacholia to be fired. It also requests more details about the decision to fire Gebru, an end to retaliatory punishment against employees who speak out, and Gebru’s reinstatement.

On December 16, 2020, nine members of Congress sent a letter to Google requesting more information about Gebru’s firing. The letter quotes Google’s memos and criticizes them for being extremely vague. The members also ask Google for more detailed policies to ensure that its research is conducted properly and helps identify discriminatory problems in its models.

After Google remained silent, employees began to take more drastic action. On January 5, David Baker, an engineering director at Google, quit in protest of Gebru’s firing. On February 3, Vinesh Kannan, a Google software engineer, quit as well, citing in his resignation the firings of Gebru and April Christina Curley, a Google recruiter who had been let go in 2020.

Google responds
On February 17, it was announced that the Google Ethical AI team would be restructured; the group was not notified internally of the changes until after the news broke. Marian Croak was appointed to lead a new responsible AI organization, taking over oversight of the team from Vice President Megan Kacholia.

Dr. Margaret Mitchell, the prominent AI researcher who founded the Google Ethical AI team, was especially vocal on Twitter about Gebru’s firing. In January 2021, Mitchell’s Google email access was blocked and she was placed on administrative leave. On February 19, 2021, Mitchell was officially fired. A Google spokesperson said that an official review had found she violated Google’s code of conduct and security policies. Asked for her reaction to Google’s statement, Mitchell replied, “I don’t know about those allegations actually. I mean most of that is news to me.”

Her firing sparked another wave of outrage among artificial intelligence researchers. On March 8, 2021, Google Walkout for Real Change wrote an open letter. Instead of appealing to Google directly, the letter attempts to place outside pressure on the company. It asks academic conferences to require that research submissions disclose corporate research policies, and to forbid lawyers from editing papers. The letter also announces the #RecruitMeNot campaign, in which people interviewing with Google withdraw their applications, specifically citing concerns over the firings of Gebru and Mitchell. Researchers are asked to refuse funding from Google, and the letter requests that whistleblower protections be strengthened at the state and national levels.

Continued backlash
On February 26, the FAccT conference decided to remove Google from its list of sponsors, citing the recent controversy with the Ethical AI team. The conference was one of the first groups to speak out against Google, a sign that the industry was paying attention to the controversy. On March 12, researchers including Dr. Hadas Kress-Gazit and Dr. Scott Niekum began to withdraw from Google’s Machine Learning and Robot Safety Workshop, as AI researchers outside of Google began to take a more definitive stance. On April 6, Google manager Samy Bengio announced his resignation. While he did not mention the firings of Gebru and Mitchell in his resignation letter, he had previously written on social media, “I stand by you, Timnit.”

Research censorship
Google had a large amount of control over the research of the Ethical AI team. The company could block research papers from being presented at conferences, creating a potential conflict of interest: research findings may reveal shortcomings in Google's own AI models, and it is important that these discoveries are published so that the same issues can be fixed in AI models elsewhere.

Oppressive work environments
Google management placed a very high priority on the company's image. When forced to make tradeoffs, the short-term reputation of the company came before Google's own employees and before making text-based AI more ethical.

Abuses of power
Google's policies for submitting to research conferences were only selectively enforced. Management refused to give specific feedback that would help improve the research paper so it could be approved, and kept the team in the dark about who was making these decisions and why.

Further Work
This controversy is ongoing. Future work could continue to track any further backlash, as well as whether the petition and open letters have lasting effects, either at Google or elsewhere. These concerns may die out or may lead to lasting policy changes at research conferences and technology companies.