CORE MACHINE LEARNING

Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture

June 18, 2023

Abstract

This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data augmentations. We introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a non-generative approach for self-supervised learning from images. The idea behind I-JEPA is simple: from a single context block, predict the representations of various target blocks in the same image. A core design choice guiding I-JEPA towards semantic representations is the masking strategy; specifically, it is crucial to (a) sample target blocks at a sufficiently large (semantic) scale, and (b) use a sufficiently informative (spatially distributed) context block. Empirically, when combined with Vision Transformers, we find I-JEPA to be highly scalable. For instance, we train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours and achieve strong downstream performance across a wide range of tasks, from linear classification to object counting and depth prediction.
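The block-masking strategy described in the abstract can be illustrated with a toy sketch on a patch grid: sample a few large target blocks, then a large context block with all target patches removed, so the predictor must infer representations of regions it cannot see. The grid size, block scales, and sampler below are illustrative assumptions, not the paper's exact hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_block(grid, scale, rng):
    """Sample a rectangular block of patches covering roughly
    `scale` of a grid x grid patch grid; returns a boolean mask."""
    area = scale * grid * grid
    bh = max(1, min(grid, int(round(np.sqrt(area)))))
    bw = max(1, min(grid, int(round(area / bh))))
    top = rng.integers(0, grid - bh + 1)
    left = rng.integers(0, grid - bw + 1)
    mask = np.zeros((grid, grid), dtype=bool)
    mask[top:top + bh, left:left + bw] = True
    return mask

GRID = 14  # e.g. a 224x224 image with 16x16 patches -> 14x14 patch grid

# (a) several reasonably large target blocks (semantic scale)
targets = [sample_block(GRID, scale=0.2, rng=rng) for _ in range(4)]

# (b) a large, spatially distributed context block,
#     with all target patches masked out of it
context = sample_block(GRID, scale=0.9, rng=rng)
for t in targets:
    context &= ~t  # predictor never sees the target regions
```

In the actual method, patch representations for the context are encoded with a ViT, and a narrow predictor maps them (plus positional tokens for the target locations) to the target-encoder representations of the masked blocks.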

AUTHORS

Written by

Mido Assran

Quentin Duval

Ishan Misra

Piotr Bojanowski

Pascal Vincent

Mike Rabbat

Yann LeCun

Nicolas Ballas

Publisher

CVPR

Research Topics

Core Machine Learning

Related Publications

May 07, 2024

CORE MACHINE LEARNING

ReTaSA: A Nonparametric Functional Estimation Approach for Addressing Continuous Target Shift

Hwanwoo Kim, Xin Zhang, Jiwei Zhao, Qinglong Tian

April 04, 2024

CORE MACHINE LEARNING

DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning

Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo

March 28, 2024

THEORY

CORE MACHINE LEARNING

On the Identifiability of Quantized Factors

Vitoria Barin Pacela, Kartik Ahuja, Simon Lacoste-Julien, Pascal Vincent

March 13, 2024

CORE MACHINE LEARNING

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian
