October 08, 2021
There is increasing interest in algorithms that learn invariant correlations across training environments. Many of the current proposals find theoretical support in the causality literature, but how useful are they in practice? The purpose of this note is to propose six linear low-dimensional problems — "unit tests" — to evaluate different types of out-of-distribution generalization in a precise manner. In initial experiments, none of three recently proposed alternatives passes all tests. By providing the code to automatically replicate all the results in this manuscript (https://www.github.com/facebookresearch/InvarianceUnitTests), we hope that our unit tests become a standard stepping stone for researchers in out-of-distribution generalization.
Workshop: https://www.cmu.edu/dietrich/causality/neurips20ws/
Publisher
Causality-Neurips-Workshop
Research Topics
Core Machine Learning
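To make the setting concrete, the following is a minimal sketch of a linear low-dimensional problem in this spirit. It is an illustrative construction, not one of the paper's six tests: each environment combines an invariant feature, whose relation to the label is fixed, with a spurious feature, whose relation to the label flips across environments, so a per-environment least-squares fit exploits the spurious correlation.

```python
import numpy as np

def make_environment(n, spurious_coef, rng):
    """Generate one training environment (hypothetical construction)."""
    x_inv = rng.normal(size=(n, 1))                 # invariant feature
    y = x_inv + 0.1 * rng.normal(size=(n, 1))       # label: same rule in every environment
    # Spurious feature: its relation to the label depends on the environment.
    x_spu = spurious_coef * y + 0.1 * rng.normal(size=(n, 1))
    return np.hstack([x_inv, x_spu]), y

rng = np.random.default_rng(0)
envs = [make_environment(1000, coef, rng) for coef in (1.0, -1.0)]

# Ordinary least squares per environment: the weight on the spurious feature
# flips sign across environments, while the invariant weight stays stable.
ws = [np.linalg.lstsq(x, y, rcond=None)[0] for x, y in envs]
```

An invariance-seeking method should place weight only on the first (invariant) feature, since any weight on the spurious feature fails under the environment where its correlation reverses; empirical risk minimization within a single environment exploits it instead.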