July 17, 2020
Characterizing the confidence of machine learning predictions unlocks models that know when they do not know. In this study, we propose a framework for assessing the quality of predictive distributions obtained from deep learning models. The framework can represent both aleatoric and epistemic uncertainty, and relies on simulated data to generate the different sources of uncertainty. It also enables quantitative evaluation of the performance of uncertainty estimation techniques. We demonstrate the proposed framework with a case study that highlights the insights it can provide.
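As a loose illustration of the recipe the abstract describes (not the authors' framework or code), the sketch below simulates regression data with a known, input-dependent noise level (an aleatoric source) and a gap in input coverage (an epistemic source), fits a small bootstrap ensemble to obtain a predictive distribution, and scores that distribution quantitatively with a held-out negative log-likelihood. Every function name and modeling choice here is a hypothetical stand-in chosen for brevity.

```python
# Illustrative sketch only -- not the paper's framework or code.
# Recipe: simulate data with known uncertainty sources, fit a model that
# outputs a predictive distribution, then score that distribution.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    """Toy regression data with controllable uncertainty sources.

    Aleatoric: observation noise whose scale grows with |x|.
    Epistemic: no training points in the interval (1, 2), so any model
    is under-determined there.
    """
    x = np.concatenate([rng.uniform(-3, 1, n // 2), rng.uniform(2, 3, n // 2)])
    noise_std = 0.1 + 0.2 * np.abs(x)           # known aleatoric noise level
    y = np.sin(x) + rng.normal(0, noise_std)    # ground-truth function + noise
    return x, y

# Fit a bootstrap ensemble of polynomial regressions; the spread of the
# ensemble serves as a crude stand-in for epistemic uncertainty.
x_train, y_train = simulate(200)
ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), len(x_train))
    ensemble.append(np.polyfit(x_train[idx], y_train[idx], deg=5))

# Pooled residual variance gives a simple (homoscedastic) aleatoric estimate.
residuals = np.stack([y_train - np.polyval(c, x_train) for c in ensemble])
aleatoric_var = residuals.var()

def predictive_distribution(x):
    """Return mean and std of the ensemble's predictive distribution."""
    preds = np.stack([np.polyval(c, x) for c in ensemble])
    sigma = np.sqrt(preds.var(axis=0) + aleatoric_var) + 1e-3  # avoid zero std
    return preds.mean(axis=0), sigma

# Quantitative evaluation: Gaussian negative log-likelihood on held-out data.
# Because the data are simulated, the true function and noise are known, so
# different uncertainty estimation techniques can be compared on equal footing.
x_test, y_test = simulate(200)
mu, sigma = predictive_distribution(x_test)
nll = 0.5 * np.mean(np.log(2 * np.pi * sigma**2) + ((y_test - mu) / sigma) ** 2)
print(f"Held-out Gaussian NLL: {nll:.3f}")
```

In this toy setup, swapping the bootstrap ensemble for another uncertainty estimation technique and re-computing the held-out score is what a quantitative comparison would look like; the simulated noise level and coverage gap are the knobs that control the sources of uncertainty.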
Written by
Jessica Ai
Beliz Gokkaya
Ilknur Kaynar Kabul
Audrey Flower
Ehsan Emamjomeh-Zadeh
Hannah Li
Li Chen
Neamah Hussein
Ousmane Dia
Sevi Baltaoglu
Erik Meijer
Publisher
International Conference on Machine Learning (ICML)