November 23, 2022
We derive new generalization bounds for deep learning models trained by transfer learning from a source task to a target task. Our bounds use a quantity called the majority predictor accuracy, which can be computed efficiently from data. We show that the theory is useful in practice, since it implies that the majority predictor accuracy can serve as a transferability measure, a fact that our experiments also validate.
Publisher
The International Symposium on Information Theory and Its Applications (ISITA)
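The abstract notes that the majority predictor accuracy can be computed efficiently from data. As a rough sketch of what such a computation could look like (the exact estimator in the paper may differ), the snippet below assumes the majority predictor maps each source label to the most frequent target label observed with it, and measures that predictor's empirical accuracy; the function name `majority_predictor_accuracy` is illustrative, not from the paper.

```python
from collections import Counter, defaultdict

def majority_predictor_accuracy(source_labels, target_labels):
    """Empirical accuracy of the predictor that assigns to each
    source label the most frequent co-occurring target label.

    Sketch only: assumes labels are given as parallel sequences of
    hashable values for the same set of examples.
    """
    # Count target labels seen with each source label.
    groups = defaultdict(Counter)
    for z, y in zip(source_labels, target_labels):
        groups[z][y] += 1
    # The majority predictor is correct exactly on the majority
    # count within each source-label group.
    correct = sum(counts.most_common(1)[0][1] for counts in groups.values())
    return correct / len(target_labels)
```

For example, with source labels `[0, 0, 0, 1, 1, 1]` and target labels `["a", "a", "b", "c", "c", "c"]`, the majority predictor maps 0 to "a" and 1 to "c", so it is correct on 5 of 6 examples. A single pass over the data suffices, which is consistent with the claim that the quantity is cheap to compute.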
May 07, 2024
Hwanwoo Kim, Xin Zhang, Jiwei Zhao, Qinglong Tian
April 04, 2024
Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo
March 28, 2024
Vitoria Barin Pacela, Kartik Ahuja, Simon Lacoste-Julien, Pascal Vincent
March 13, 2024
Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian