Meredith Whittaker
AI Researcher, Open Research Group Founder · Google · 2019
Resigned citing retaliation for organizing the 2018 Google Walkout, which protested the company's handling of sexual misconduct. Said Google had forced her to "abandon her work" on AI ethics, writing: "It's clear Google isn't a place where I can continue this work." Co-founded the AI Now Institute at NYU.
Key Publications
- AI Now 2018 Report. AI Now Institute (NYU), report.
This annual report from the AI Now Institute at New York University examines the growing gap between the rapid deployment of AI systems across society and the inadequate accountability structures governing their use, focusing on domains where the stakes for affected individuals are highest. The report documents the expanding use of AI in government decision-making, including criminal justice, welfare eligibility, and immigration enforcement, and argues that many of these deployments lack meaningful transparency, due process protections, or mechanisms for affected individuals to challenge automated decisions. It also addresses the rise of AI-powered surveillance systems, including facial recognition technology deployed by law enforcement, and warns about the lack of regulation governing these tools and their disproportionate impact on communities of color. The authors call for banning the use of 'black box' AI systems in consequential government decisions, establishing meaningful accountability frameworks, and expanding the right of affected communities to challenge automated decisions. The report exemplifies the AI Now Institute's influential approach of combining empirical research with concrete policy recommendations, and its warnings about surveillance and automated decision-making have been borne out by subsequent controversies involving facial recognition, predictive policing, and algorithmic bias in public services.
- Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute (NYU), report.
This report presents a detailed examination of how the AI industry's severe lack of diversity, particularly the underrepresentation of women, Black, and Latino workers, directly contributes to the development and deployment of biased AI systems that reinforce existing patterns of discrimination. The authors compile evidence from across the industry showing that the homogeneity of AI development teams leads to blind spots in system design, data collection, and evaluation practices, resulting in products that perform poorly for underrepresented groups and encode harmful stereotypes. The report documents specific cases where biased AI systems have caused real harm, from hiring algorithms that penalize women to criminal justice tools that assign higher risk scores to Black defendants, illustrating how technical bias and social inequality are mutually reinforcing. Beyond diagnosis, the authors propose structural interventions including increasing diversity in AI research and development, expanding the scope of AI bias research beyond technical fixes to address underlying power dynamics, and strengthening legal and regulatory frameworks to protect affected communities. The report has been widely cited in both academic and policy contexts and helped establish the argument that addressing AI bias requires confronting the social and institutional structures of the technology industry itself, not merely adjusting algorithms.