ROBOTICS

REINFORCEMENT LEARNING

VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement

November 28, 2022

Abstract

We present Variable Experience Rollout (VER), a technique for efficiently scaling batched on-policy reinforcement learning in heterogeneous environments (where different environments take vastly different times to generate rollouts) to many GPUs residing on, potentially, many machines. VER combines the strengths of, and blurs the line between, synchronous and asynchronous on-policy RL methods (SyncOnRL and AsyncOnRL, respectively). Specifically, it learns from on-policy experience (like SyncOnRL) and has no synchronization points (like AsyncOnRL), enabling high throughput. We find that VER leads to significant and consistent speed-ups across a broad range of embodied navigation and mobile manipulation tasks in photorealistic 3D simulation environments. Specifically, for PointGoal navigation and ObjectGoal navigation in Habitat 1.0, VER is 60-100% faster (1.6-2x speedup) than DD-PPO, the current state of the art for distributed SyncOnRL, with similar sample efficiency. For mobile manipulation tasks (open fridge/cabinet, pick/place objects) in Habitat 2.0, VER is 150% faster (2.5x speedup) on 1 GPU and 170% faster (2.7x speedup) on 8 GPUs than DD-PPO. Compared to SampleFactory (the current state-of-the-art AsyncOnRL), VER matches its speed on 1 GPU and is 70% faster (1.7x speedup) on 8 GPUs, with better sample efficiency. We leverage these speed-ups to train chained skills for GeometricGoal rearrangement tasks in the Home Assistant Benchmark (HAB). We find a surprising emergence of navigation in skills that do not ostensibly require any navigation. Specifically, the Pick skill involves a robot picking an object from a table. During training, the robot was always spawned close to the table and never needed to navigate. However, we find that if base movement is part of the action space, the robot learns to navigate and then pick an object in new environments with 50% success, demonstrating surprisingly high out-of-distribution generalization.
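
To make the core idea concrete, here is a minimal toy sketch in Python (not code from the paper; the per-environment step costs and the names fixed_rollout and variable_rollout are illustrative assumptions). It contrasts fixed-length rollouts, where every environment must contribute the same number of steps and each update is gated by the slowest simulator, with a variable scheme, where each environment contributes however many steps it completes while the learner consumes a fixed total batch:

```python
NUM_ENVS = 4
BATCH_STEPS = 64  # the learner always consumes the same total number of steps

# Simulated per-step cost: heterogeneous environments, some much slower.
STEP_TIME = {0: 1.0, 1: 1.0, 2: 4.0, 3: 8.0}

def fixed_rollout(steps_per_env):
    """SyncOnRL-style: every env contributes steps_per_env steps, so each
    update's wall-clock time is set by the slowest environment."""
    return max(STEP_TIME[e] * steps_per_env for e in range(NUM_ENVS))

def variable_rollout(budget):
    """VER-style (toy): always step whichever env finishes next, until the
    batch is full; slow envs simply contribute fewer (still on-policy) steps."""
    clock = {e: 0.0 for e in range(NUM_ENVS)}
    counts = {e: 0 for e in range(NUM_ENVS)}
    for _ in range(budget):
        e = min(clock, key=lambda k: clock[k] + STEP_TIME[k])  # next to finish
        clock[e] += STEP_TIME[e]
        counts[e] += 1
    return max(clock.values()), counts

sync_time = fixed_rollout(BATCH_STEPS // NUM_ENVS)
ver_time, per_env = variable_rollout(BATCH_STEPS)
print(f"fixed rollouts:    {sync_time:.0f} time units per update")
print(f"variable rollouts: {ver_time:.0f} time units, steps per env {per_env}")
```

With these toy costs, the fixed scheme needs 128 time units per 64-step update while the variable scheme fills the same batch in 28, which is the intuition behind the reported speed-ups; the real system additionally overlaps simulation, inference, and learning across workers and GPUs.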

AUTHORS

Dhruv Batra

Erik Wijmans

Irfan Essa

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Robotics

Related Publications

June 18, 2023

ROBOTICS

REINFORCEMENT LEARNING

Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

Vincent-Pierre Berges, Andrew Szot, Devendra Singh Chaplot, Aaron Gokaslan, Dhruv Batra, Eric Undersander

May 04, 2023

ROBOTICS

REINFORCEMENT LEARNING

MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran

March 31, 2023

ROBOTICS

REINFORCEMENT LEARNING

PIRLNav: Pretraining with Imitation and RL Finetuning for ObjectNav

Ram Ramrakhya, Dhruv Batra, Erik Wijmans, Abhishek Das

March 29, 2023

ROBOTICS

Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?

Franziska Meier, Aravind Rajeswaran, Dhruv Batra, Jitendra Malik, Karmesh Yadav, Oleksandr Maksymets, Sergio Arnaud, Sneha Silwal, Vincent-Pierre Berges, Aryan Jain, Claire Chen, Jason Ma, Yixin Lin
