Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots

March 12, 2024

Abstract

We present Habitat 3.0: a simulation platform for studying collaborative human-robot tasks in home environments. Habitat 3.0 offers contributions across three dimensions: (1) Accurate humanoid simulation: addressing challenges in modeling complex deformable bodies and diversity in appearance and motion, all while ensuring high simulation speed. (2) Human-in-the-loop infrastructure: enabling real human interaction with simulated robots via mouse/keyboard or a VR interface, facilitating evaluation of robot policies with human input. (3) Collaborative tasks: studying two collaborative tasks, Social Navigation and Social Rearrangement. Social Navigation investigates a robot's ability to locate and follow humanoid avatars in unseen environments, whereas Social Rearrangement addresses collaboration between a humanoid and a robot while rearranging a scene. These contributions allow us to study end-to-end learned and heuristic baselines for human-robot collaboration in depth, as well as evaluate them with humans in the loop. Our experiments demonstrate that learned robot policies lead to efficient task completion when collaborating with unseen humanoid agents and human partners who may exhibit behaviors the robot has not seen before. Additionally, we observe emergent behaviors during collaborative task execution, such as the robot yielding space when obstructing a humanoid agent, thereby allowing the humanoid agent to complete the task effectively. Furthermore, our experiments using the human-in-the-loop tool demonstrate that our automated evaluation with humanoids can provide an indication of the relative ordering of different policies when evaluated with real human collaborators. Habitat 3.0 unlocks interesting new features in simulators for Embodied AI, and we hope it paves the way for a new frontier of embodied human-AI interaction capabilities.

AUTHORS

Andrew Szot

Ruslan Partsey

Vladimír Vondruš

Zsolt Kira

Roozbeh Mottaghi

Akshara Rai

Alexander William Clegg

Devendra Singh Chaplot

Dhruv Batra

Eric Undersander

Jimmy Yang

Jitendra Malik

Mikael Dallaire Cote

Mrinal Kalakrishnan

Oleksandr Maksymets

Ruta Desai

Theo Gervet

Tiffany Min

Unnat Jain

Vincent-Pierre Berges

Xavi Puig

Publisher

ICLR

Research Topics

Robotics

Related Publications

June 11, 2025

ROBOTICS

COMPUTER VISION

CausalVQA: A Physically Grounded Causal Reasoning Benchmark for Video Models

Aaron Foss, Ammar Rizvi, Chloe Evans, Justine T. Kao, Koustuv Sinha, Sasha Mitts

June 11, 2025

ROBOTICS

RESEARCH

V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning

Mojtaba Komeili, Sarath Chandar, Abha Gejji, Ada Martin, Adrien Bardes, Ammar Rizvi, Artem Zholus, Claire Roberts, Daniel Dugas, David Fan, Francisco Massa, Francois Robert Hogan, Franziska Meier, Kapil Krishnakumar, Koustuv Sinha, Marc Szafraniec, Matthew Muckley, Mido Assran, Michael Rabbat, Nicolas Ballas, Patrick Labatut, Piotr Bojanowski, Quentin Garrido, Russell Howes, Sergio Arnaud, Vasil Khalidov, Xiaodong Ma, Yann LeCun, Yong Li

April 17, 2025

ROBOTICS

RESEARCH

Locate 3D: Real-World Object Localization via Self-Supervised Learning in 3D

Ruslan Partsey, Ayush Jain, Ang Cao, Ishita Prasad, Aravind Rajeswaran, Abha Gejji, Ada Martin, Arjun Majumdar, Daniel Dugas, Franziska Meier, Krishna Murthy Jatavallabhula, Mido Assran, Mikael Henaff, Mike Rabbat, Mrinal Kalakrishnan, Nicolas Ballas, Oleksandr Maksymets, Paul McVay, Phillip Thomas, Alexander Sax, Sergio Arnaud, Vincent-Pierre Berges

October 31, 2024

HUMAN & MACHINE INTELLIGENCE

ROBOTICS

Digitizing Touch with an Artificial Multimodal Fingertip

Nolan Black, Romeo Mercado, Norb Tydingco, Gregg Kammerer, Ricardo Chavira, Eric Sanchez, Yitian Ding, Roberto Calandra, Mike Lambeta, Alexander Sohn, Ali Sengül, Byron Taylor, Dave Stroud, Haozhi Qi, Jake Khatha, Jitendra Malik, Kevin Sawyer, Kurt Jenkins, Kyle Most, Neal Stein, Thomas Craven-Bartle, Tingfan Wu, Victoria Rose Most
