Fairness On The Ground: Applying Algorithmic Fairness Approaches To Production Systems

March 11, 2021

Abstract

Many technical approaches have been proposed for ensuring that decisions made by machine learning systems are fair, but few of these proposals have been stress-tested in real-world systems. This paper presents an example of one team’s approach to the challenge of applying algorithmic fairness approaches to complex production systems within the context of a large technology company. We discuss how we disentangle normative questions of product and policy design (like, “how should the system trade off between different stakeholders’ interests and needs?”) from empirical questions of system implementation (like, “is the system achieving the desired tradeoff in practice?”). We also present an approach for answering questions of the latter sort, which allows us to measure how machine learning systems and human labelers are making these tradeoffs across different relevant groups. We hope our experience integrating fairness tools and approaches into large-scale and complex production systems will be useful to other practitioners facing similar challenges, and illuminating to academics and researchers looking to better address the needs of practitioners.
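The abstract describes measuring how systems make tradeoffs "across different relevant groups." As a minimal illustration of that kind of group-wise measurement (not the paper's actual pipeline; the record format and names here are hypothetical), one can compare a per-group decision rate:

```python
from collections import defaultdict

def rates_by_group(records):
    """Compute the positive-decision rate for each group.

    `records` is a list of (group, decision) pairs, where decision is
    0 or 1. This is an illustrative sketch, not the paper's method.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical labeled decisions for two groups, A and B.
data = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
rates = rates_by_group(data)
print({g: round(r, 2) for g, r in rates.items()})
```

Comparing such rates (or error rates) between groups is one simple empirical check of whether a system achieves a desired tradeoff in practice; the paper's actual measurement approach is richer and also covers human labelers.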

AUTHORS

Chloé Bakalar

Renata Barreto

Stevie Bergman

Miranda Bogen

Bobbie Chern

Sam Corbett-Davies

Melissa Hall

Isabel Kloumann

Michelle Lam

Joaquin Quiñonero Candela

Manish Raghavan

Joshua Simons

Jonathan Tannen

Edmund Tong

Kate Vredenburgh

Jiejing Zhao
