Ego4D: Around the World in 3,000 Hours of Egocentric Video

October 14, 2021

Abstract

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite. It offers 3,025 hours of daily-life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 855 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to collection is designed to uphold rigorous privacy and ethics standards with consenting participants and robust de-identification procedures where relevant. Ego4D dramatically expands the volume of diverse egocentric video footage publicly available to the research community. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. Furthermore, we present a host of new benchmark challenges centered around understanding the first-person visual experience in the past (querying an episodic memory), present (analyzing hand-object manipulation, audio-visual conversation, and social interactions), and future (forecasting activities). By publicly sharing this massive annotated dataset and benchmark suite, we aim to push the frontier of first-person perception.
Project Page: https://ego4d-data.org/
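For readers who want to explore the release programmatically, the sketch below shows one way to tally the footage from a local copy of the dataset metadata. It is a minimal illustration only: the download path, the ego4d.json filename, and the videos/duration_sec keys are assumptions about a typical local layout, not details taken from this page; consult the project page above for the official download tooling and annotation schema.

```python
import json
from pathlib import Path

# NOTE: the directory layout, filename, and JSON keys below are assumptions
# about a typical local download, not details confirmed by this page.
# See https://ego4d-data.org/ for the official tooling and schema.
EGO4D_ROOT = Path("~/ego4d_data/v1").expanduser()   # hypothetical download root
METADATA_FILE = EGO4D_ROOT / "ego4d.json"           # assumed top-level metadata file


def summarize_metadata(metadata_path: Path) -> None:
    """Print a rough summary (video count and total hours) from the metadata JSON."""
    with open(metadata_path) as f:
        meta = json.load(f)

    videos = meta.get("videos", [])                  # assumed list of per-video records
    total_sec = sum(v.get("duration_sec", 0.0) for v in videos)

    print(f"videos: {len(videos)}")
    print(f"total footage: {total_sec / 3600:.1f} hours")


if __name__ == "__main__":
    summarize_metadata(METADATA_FILE)
```

If the field names differ in the actual release, only the two `get` calls need to change; the rest of the sketch is plain standard-library Python.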

Authors

Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Akshay Erapall, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

Publisher

arXiv

Research Topics

Computer Vision

Graphics
