October 08, 2021
There is an increasing interest in algorithms to learn invariant correlations across training environments. A big share of the current proposals find theoretical support in the causality literature, but how useful are they in practice? The purpose of this note is to propose six linear low-dimensional problems ("unit tests") to evaluate different types of out-of-distribution generalization in a precise manner. Following initial experiments, none of three recently proposed alternatives passes all tests. By providing the code to automatically replicate all the results in this manuscript (https://www.github.com/facebookresearch/InvarianceUnitTests), we hope that our unit tests become a standard stepping stone for researchers in out-of-distribution generalization.
Workshop: https://www.cmu.edu/dietrich/causality/neurips20ws/
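To make the setting concrete, here is a minimal sketch of the kind of linear low-dimensional problem the note studies. The construction below is illustrative, not one of the paper's six tests: a target depends on an invariant feature in the same way in every environment, while a spurious feature correlates with the target with an environment-dependent strength, so a pooled least-squares fit (ERM) picks up the spurious feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_environment(n, spurious_scale):
    """Toy linear environment: y depends on the invariant feature x_inv
    identically everywhere, while x_spu tracks y with an
    environment-dependent strength (a hypothetical construction)."""
    x_inv = rng.normal(size=(n, 1))
    y = x_inv + 0.1 * rng.normal(size=(n, 1))
    x_spu = spurious_scale * y + 0.1 * rng.normal(size=(n, 1))
    return np.hstack([x_inv, x_spu]), y

# Two training environments with different spurious correlations.
envs = [make_environment(10_000, s) for s in (1.0, 2.0)]

# Pooled ERM baseline: ordinary least squares over all environments.
X = np.vstack([x for x, _ in envs])
Y = np.vstack([y for _, y in envs])
w, *_ = np.linalg.lstsq(X, Y, rcond=None)

# An invariance-seeking method should put (near-)zero weight on x_spu;
# pooled ERM assigns it a clearly nonzero weight instead.
print("weights [invariant, spurious]:", w.ravel())
```

An out-of-distribution test environment can then flip or rescale the spurious correlation, which is what makes the ERM solution fail while the invariant predictor survives.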
Publisher
Causality-NeurIPS-Workshop
Research Topics
Core Machine Learning