Linear unit-tests for invariance discovery

October 08, 2021

Abstract

There is increasing interest in algorithms that learn invariant correlations across training environments. Many of the current proposals find theoretical support in the causality literature, but how useful are they in practice? The purpose of this note is to propose six linear low-dimensional problems ("unit tests") to evaluate different types of out-of-distribution generalization in a precise manner. In our initial experiments, none of the three recently proposed alternatives passes all tests. By providing the code to automatically replicate all the results in this manuscript (https://www.github.com/facebookresearch/InvarianceUnitTests), we hope that our unit tests become a standard stepping stone for researchers in out-of-distribution generalization.
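The actual unit tests live in the repository linked above. As a rough illustration of the kind of problem they probe (not the paper's exact constructions), the sketch below builds low-dimensional linear data across several training environments in which one feature carries an invariant correlation with the label while another feature's correlation varies by environment; all variable names, coefficients, and environment settings here are illustrative assumptions.

```python
import numpy as np

def make_environment(n=1000, env_scale=1.0, seed=0):
    """Toy linear environment (illustrative only, not the paper's exact tests).

    x_inv has a fixed (invariant) linear relationship with y in every
    environment; x_spu correlates with y through an environment-dependent
    coefficient, so it is a spurious feature.
    """
    rng = np.random.default_rng(seed)
    x_inv = rng.normal(size=(n, 1))                          # invariant feature
    y = x_inv @ np.array([[1.0]]) + 0.1 * rng.normal(size=(n, 1))
    x_spu = env_scale * y + 0.1 * rng.normal(size=(n, 1))    # spurious feature
    x = np.concatenate([x_inv, x_spu], axis=1)
    return x, y

# Three training environments with different spurious strengths; an
# out-of-distribution test environment reverses the spurious sign.
train_envs = [make_environment(env_scale=s, seed=i)
              for i, s in enumerate([0.5, 1.0, 2.0])]
test_env = make_environment(env_scale=-1.0, seed=99)

# Ordinary least squares pooled over the training data picks up the spurious
# feature and degrades out of distribution, which is the failure mode such
# unit tests are designed to expose.
x_tr = np.concatenate([x for x, _ in train_envs])
y_tr = np.concatenate([y for _, y in train_envs])
w, *_ = np.linalg.lstsq(x_tr, y_tr, rcond=None)
x_te, y_te = test_env
print("train MSE:", float(np.mean((x_tr @ w - y_tr) ** 2)))
print("test  MSE:", float(np.mean((x_te @ w - y_te) ** 2)))
```

An invariance-discovery method that recovers the invariant coefficient would keep both errors comparable; the gap reported by the pooled least-squares baseline is the quantity this kind of test measures.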

AUTHORS

Benjamin Charles Aubin

Aga Slowik

Leon Bottou

David Lopez-Paz

Publisher

Causality-NeurIPS-Workshop (https://www.cmu.edu/dietrich/causality/neurips20ws/)

Research Topics

Core Machine Learning

