The SSL Interplay: Augmentations, Inductive Bias, and Generalization

June 26, 2023

Abstract

Self-supervised learning (SSL) has emerged as a powerful framework for learning representations from raw data without supervision. Yet in practice, engineers face issues such as instability when tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study this interplay through a precise analysis of generalization performance on both pretraining and downstream tasks in a theory-friendly setup, and highlight several insights for SSL practitioners that arise from our theory.
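To make the collapse failure mode mentioned in the abstract concrete, below is a minimal PyTorch sketch (our illustration, not the paper's setup; the encoder, augmentation, and loss weights are all hypothetical). Two augmented views of the same input are pulled together by an invariance loss, which alone admits the trivial solution of a constant encoder; a VICReg-style variance hinge is one standard way to rule that collapse out.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical encoder: any small network works for this illustration.
encoder = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.SGD(encoder.parameters(), lr=0.1)

def augment(x):
    # Toy data augmentation: additive Gaussian noise.
    return x + 0.1 * torch.randn_like(x)

for step in range(300):
    x = torch.randn(256, 10)              # stand-in for raw data
    z1, z2 = encoder(augment(x)), encoder(augment(x))

    # Invariance term: make the two views' embeddings agree.
    invariance = ((z1 - z2) ** 2).mean()
    # Variance hinge (VICReg-style): keep each embedding dimension's
    # std above 1, which excludes the constant (collapsed) encoder.
    variance = torch.relu(1.0 - z1.std(dim=0)).mean()

    loss = invariance + variance          # drop `variance` to observe collapse
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    print("per-dimension embedding std:",
          encoder(torch.randn(512, 10)).std(dim=0))
```

In this toy example, removing the variance term drives the per-dimension standard deviation of the embeddings toward zero within a few hundred steps; that trivial solution is precisely the collapse whose dependence on augmentations, architecture, and training algorithm the paper analyzes.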

AUTHORS

Vivien Cabannes

Bobak Kiani

Randall Balestriero

Yann LeCun

Alberto Bietti

Publisher

ICML

Research Topics

Core Machine Learning

Related Publications

May 07, 2024

CORE MACHINE LEARNING

ReTaSA: A Nonparametric Functional Estimation Approach for Addressing Continuous Target Shift

Hwanwoo Kim, Xin Zhang, Jiwei Zhao, Qinglong Tian

April 04, 2024

CORE MACHINE LEARNING

DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning

Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo

March 28, 2024

THEORY

CORE MACHINE LEARNING

On the Identifiability of Quantized Factors

Vitoria Barin Pacela, Kartik Ahuja, Simon Lacoste-Julien, Pascal Vincent

March 13, 2024

CORE MACHINE LEARNING

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian
