title: Onward! Another #GoogleWalkout Goodbye
In April, two of the organizers of the Google Walkout, Meredith Whittaker and Claire Stapleton, came forward with the stories of the retaliation they’ve faced as a result of speaking out at the company. Claire left Google in June—yesterday was Meredith’s last day.
Here’s the note she shared internally:
July 10th was my 13-year Google anniversary, and today is my last day.
My experience at Google shaped who I am and the path I’m on. It’s hard to overstate how grateful I am for the teachers, mentors, and friends along the way, or how surreal this moment is. I still can’t imagine my badge not working.
The reasons I’m leaving aren’t a mystery. I’m committed to the AI Now Institute, to my AI ethics work, and to organizing for an accountable tech industry — and it’s clear Google isn’t a place where I can continue this work.
This has been hard to accept, since this work urgently needs doing. Google is one of the most powerful organizations on the planet; I’ve had the privilege to see it grow from a few thousand committed people to the behemoth it is today.
The company has emerged as a global leader in AI (the result of some combination of strategy, luck, timing, and massive centralized data and compute resources). This has helped propel Google’s entry into “new markets” — healthcare, fossil fuels, city development and governance, transportation, and beyond.
The result is that Google, in the conventional pursuit of quarterly earnings, is gaining significant and largely unchecked power to impact our world (including in profoundly dangerous ways, such as accelerating the extraction of fossil fuels and the deployment of surveillance technology). I’m certain many in leadership — who learned what Google was and why it was great over a decade ago — don’t truly understand the direction in which Google is growing. Nor are they incentivized to.
How this vast power is used — who benefits and who bears the risk — is one of the most urgent social and political (and yes, technical) questions of our time. And we have a lot of work to do. The AI field is overwhelmingly white and male, and as the Walkout highlighted, there are systems in place that are keeping it that way. This, while marginalized populations bear most of the risks of biased or harmful AI. The AI industry and the tools it creates are already widening inequality, enriching the powerful and disadvantaging those who are struggling.
Addressing these problems, and making sure AI is just, accountable, and safe, will require serious structural change to how technology is developed and how tech corporations are run. Ethical principles and in-house ethical reviews are a positive step, but we need a lot more.
I’ve had an amazing time here. I climbed my way from an entry-level role at Google in 2006 to an established position as a researcher and public voice on AI issues. I marshalled and presented evidence in the service of more accountable technology. I’m proud of what I did, and grateful to have worked with amazing colleagues.
I have tried hard to offer evidence and pathways for positive structural change, but over time I realized that my presence “at the table” was more about the appearance of an inclusive debate than about seriously contending with the problems in the company. In the meantime, the issues of AI, bias, and inequity grew more urgent, and I became increasingly worried.
Part of my response was to co-found the AI Now Institute at NYU with Kate Crawford, establishing a home for rigorous research that could examine the social implications of AI, and communicate this to the public. This has been an unqualified success, and we’ve already had extraordinary impact across research and policy. The other part was to begin organizing: history shows that centralized power rarely concedes without collective action.
What began as an experiment — can we apply labor organizing to address tech’s ethical crisis? — became one of the most difficult and gratifying efforts I’ve ever been involved in. Organized tech workers — you! — have emerged as a force capable of making real change, pushing for public accountability, oversight, and meaningful equity. And this right when the world needs it most.
Leaving Google is deeply emotional for me, and I don’t know all of the ways I’ll miss it. I’m lucky because I get to continue my work at AI Now. And I’d be much sadder if I didn’t see many hundreds of Googlers establishing themselves as leaders, contributing their brilliance to organizing, and refusing to stand silent in the face of leadership’s dangerous complicity. Please, keep going!
The stakes are extremely high. The use of AI for social control and oppression is already emerging, even in the face of developers’ best intentions. We have a short window in which to act, to build in real guardrails for these systems, before AI is built into our infrastructure and it’s too late.
I offer my unwavering support and love to those of you who continue to do amazing work here, and who have taken risks to support others. In solidarity with all of you who will continue this essential work within Google, I’ll close by offering an incomplete map of where I see future tech organizing moving.
1. I use Google in place of Alphabet, as it’s more readable, and whatever the corporate structure, we’re talking about a single company that by and large relies on a shared set of centralized resources.
2. I use the term “AI” loosely, to include machine learning and related technologies that rely on data and encoded assumptions to “understand” a given domain, topic area, etc., and work to apply this understanding to the interpretation and classification of novel data inputs.
3. The sale of Cloud AI APIs means that the reach and implications of Google’s AI offerings spread well beyond the company’s official products, and are largely obscure to Googlers and to those most affected by the use of these technologies.
4. While I’m focusing on Google, for obvious reasons, this critique applies to a number of other large tech companies, including Amazon, Facebook, and Microsoft. Given the computational and data resources required to build AI at scale, only a handful of companies on the planet have the capacity to create it.
5. See AI Now’s 2018 report for more on this: https://ainowinstitute.org/AI_Now_2018_Report.pdf
6. My recent Congressional testimony expands on these points: https://republicans-science.house.gov/legislation/hearings/full-committee-hearing-artificial-intelligence-societal-and-ethical, as does AI Now’s Discriminating Systems report: https://ainowinstitute.org/discriminatingsystems.pdf