Fairness On The Ground: Applying Algorithmic Fairness Approaches To Production Systems

March 11, 2021


Many technical approaches have been proposed for ensuring that decisions made by machine learning systems are fair, but few of these proposals have been stress-tested in real-world systems. This paper presents an example of one team’s approach to the challenge of applying algorithmic fairness approaches to complex production systems within the context of a large technology company. We discuss how we disentangle normative questions of product and policy design (like, “how should the system trade off between different stakeholders’ interests and needs?”) from empirical questions of system implementation (like, “is the system achieving the desired tradeoff in practice?”). We also present an approach for answering questions of the latter sort, which allows us to measure how machine learning systems and human labelers are making these tradeoffs across different relevant groups. We hope our experience integrating fairness tools and approaches into large-scale and complex production systems will be useful to other practitioners facing similar challenges, and illuminating to academics and researchers looking to better address the needs of practitioners.
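As a purely illustrative sketch (not the paper's actual method or code), measuring how a system's decisions play out "across different relevant groups" can be as simple as computing group-conditional decision rates and precision; the group labels, scores, and 0.5 threshold below are hypothetical assumptions:

```python
# Hypothetical sketch: compare a model's positive-decision rate and
# precision across groups. The groups ("A", "B"), scores, outcomes,
# and threshold are illustrative, not from the paper.

def group_metrics(records, threshold=0.5):
    """records: iterable of (group, score, outcome) tuples."""
    stats = {}
    for group, score, outcome in records:
        s = stats.setdefault(group, {"n": 0, "pos": 0, "true_pos": 0})
        s["n"] += 1
        if score >= threshold:
            s["pos"] += 1          # decision was positive
            s["true_pos"] += outcome  # positive decision was correct
    return {
        g: {
            "positive_rate": s["pos"] / s["n"],
            "precision": s["true_pos"] / s["pos"] if s["pos"] else None,
        }
        for g, s in stats.items()
    }

data = [
    ("A", 0.9, 1), ("A", 0.6, 0), ("A", 0.2, 0),
    ("B", 0.8, 1), ("B", 0.4, 1), ("B", 0.1, 0),
]
print(group_metrics(data))
```

Disparities in such group-conditional metrics are one way to check empirically whether a system is "achieving the desired tradeoff in practice," separately from the normative question of which tradeoff it should make.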

Download the Paper


Written by

Chloé Bakalar

Renata Barreto

Stevie Bergman

Miranda Bogen

Bobbie Chern

Sam Corbett-Davies

Melissa Hall

Isabel Kloumann

Michelle Lam

Joaquin Quiñonero Candela

Manish Raghavan

Joshua Simons

Jonathan Tannen

Edmund Tong

Kate Vredenburgh

Jiejing Zhao

Related Publications

July 23, 2024

Computer Vision

Imagine yourself: Tuning-Free Personalized Image Generation

Zecheng He, Bo Sun, Felix Xu, Haoyu Ma, Ankit Ramchandani, Vincent Cheung, Siddharth Shah, Anmol Kalia, Ning Zhang (AI), Peizhao Zhang, Roshan Sumbaly, Peter Vajda, Animesh Sinha

July 22, 2024

Human & Machine Intelligence

Conversational AI

The Llama 3 Herd of Models

Llama team

July 22, 2024

Systems Research

CYBERSECEVAL 3: Advancing the Evaluation of Cybersecurity Risks and Capabilities in Large Language Models

Shengye Wan, Cyrus Nikolaidis, Daniel Song, David Molnar, James Crnkovich, Jayson Grace, Manish Bhatt, Sahana Chennabasappa, Spencer Whitman, Stephanie Ding, Vlad Ionescu, Yue Li, Joshua Saxe

July 21, 2024

Core Machine Learning

From Neurons to Neutrons: A Case Study in Mechanistic Interpretability

Ouail Kitouni, Niklas Nolte, Samuel Pérez Díaz, Sokratis Trifinopoulos, Mike Williams

April 08, 2021

Responsible AI

Towards measuring fairness in AI: the Casual Conversations dataset

Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, Cristian Canton Ferrer

November 16, 2021

Computer Vision

How Meta is working to assess fairness in relation to race in the U.S. across its products and systems

Rachad Alao, Miranda Bogen, Jingang Miao, Ilya Mironov, Jonathan Tannen

October 12, 2021

Computer Vision

LiRA: Learning Visual Speech Representations from Audio through Self-supervision

Pingchuan Ma, Rodrigo Mira, Stavros Petridis, Björn W. Schuller, Maja Pantic

October 14, 2021

Computer Vision

Ego4D: Around the World in 3,000 Hours of Egocentric Video

Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, …
