Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?

March 29, 2023


We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs), or visual ‘foundation models’, for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous manipulation, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none is universally dominant. To study the effect of pre-training data scale and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources (over 5.6M images) and ImageNet to train different-sized vision transformers using Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from prior work, we find that scaling dataset size and diversity does not improve performance universally (but does so on average). Our largest model, named VC-1, outperforms all prior PVRs on average but does not universally dominate either. Finally, we show that task- or domain-specific adaptation of VC-1 leads to substantial gains, with VC-1 (adapted) achieving performance competitive with, or superior to, the best known results on all of the benchmarks in CortexBench. These models required over 10,000 GPU-hours to train and can be found on our website for the benefit of the research community.
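The MAE pre-training referenced above works by splitting each image into fixed-size patches, hiding a large random fraction of them (typically 75%), encoding only the visible patches, and training a decoder to reconstruct the rest. The sketch below illustrates just the patchify-and-mask step in NumPy; it is a minimal, illustrative example and not the paper's actual training code (patch size 16 and a 224x224 input are assumptions matching common ViT configurations).

```python
import numpy as np

def patchify(img, patch):
    # Split an (H, W, C) image into non-overlapping patches,
    # each flattened to a (patch * patch * C)-dim vector.
    H, W, C = img.shape
    ph, pw = H // patch, W // patch
    x = img.reshape(ph, patch, pw, patch, C)
    x = x.transpose(0, 2, 1, 3, 4).reshape(ph * pw, patch * patch * C)
    return x

def random_mask(patches, mask_ratio=0.75, rng=None):
    # Keep a random (1 - mask_ratio) subset of patches, as in MAE:
    # the encoder sees only the visible patches, and a decoder is
    # trained to reconstruct the masked ones.
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])
    return patches[keep_idx], keep_idx

# Hypothetical 224x224 RGB input: 196 patches of 16x16x3 = 768 dims each.
img = np.arange(224 * 224 * 3, dtype=np.float32).reshape(224, 224, 3)
patches = patchify(img, 16)
visible, keep_idx = random_mask(patches, mask_ratio=0.75)
```

With a 75% mask ratio, the encoder processes only 49 of the 196 patches, which is what makes MAE pre-training cheap enough to scale to millions of egocentric video frames.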



Written by

Franziska Meier

Aravind Rajeswaran

Dhruv Batra

Jitendra Malik

Karmesh Yadav

Oleksandr Maksymets

Sergio Arnaud

Sneha Silwal

Vincent-Pierre Berges

Aryan Jain

Claire Chen

Jason Ma

Yixin Lin





Related Publications

October 12, 2023


SLAP: Spatial-Language Attention Policies

Christopher Paxton, Jay Vakil, Priyam Parashar, Sam Powers, Xiaohan Zhang, Yonatan Bisk, Vidhi Jain


July 10, 2023



StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects

Weiyu Liu, Yilun Du, Tucker Hermans, Sonia Chernova, Christopher Paxton


June 18, 2023



Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

Vincent-Pierre Berges, Andrew Szot, Devendra Singh Chaplot, Aaron Gokaslan, Dhruv Batra, Eric Undersander


May 04, 2023



MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran

