ROBOTICS

EMERGENCE OF MAPS IN THE MEMORIES OF BLIND NAVIGATION AGENTS

February 01, 2023

Abstract

Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment. We ask if machines – specifically, artificial intelligence (AI) navigation agents – also build implicit (or ‘mental’) maps. A positive answer to this question would (a) explain the surprising phenomenon in recent literature of ostensibly map-free neural networks achieving strong performance, and (b) strengthen the evidence for mapping as a fundamental mechanism for navigation by intelligent embodied agents, whether they be biological or artificial. Unlike animal navigation, we can judiciously design the agent’s perceptual system and control the learning paradigm to nullify alternative navigation mechanisms. Specifically, we train ‘blind’ agents – with sensing limited to only egomotion and no other sensing of any kind – to perform PointGoal navigation (‘go to ∆x, ∆y’) via reinforcement learning. Our agents are composed of navigation-agnostic components (fully-connected and recurrent neural networks), and our experimental setup provides no inductive bias towards mapping. Despite these harsh conditions, we find that blind agents (1) are surprisingly effective navigators in new environments (∼95% success); (2) utilize memory over long horizons (remembering ∼1,000 steps of past experience in an episode); (3) exhibit intelligent behavior enabled by this memory (following walls, detecting collisions, taking shortcuts); (4) build representations of the environment in which maps and collision-detection neurons emerge as they navigate; and (5) build emergent maps that are selective and task-dependent (e.g. the agent ‘forgets’ exploratory detours). Overall, this paper presents no new techniques for the AI audience, but a surprising finding, an insight, and an explanation.
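The architecture described in the abstract – navigation-agnostic fully-connected and recurrent components, with egomotion as the only sensory input – can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's model: all dimensions, weight values, and the exact observation layout (egomotion reading plus PointGoal vector) are assumptions for the sake of a concrete example, and the random weights stand in for parameters that would be learned via reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper's exact sizes are not stated here.
OBS_DIM = 4      # illustrative egomotion reading (dx, dy, dtheta) + collision flag
GOAL_DIM = 2     # PointGoal target vector (delta-x, delta-y)
HIDDEN = 8       # recurrent memory size
N_ACTIONS = 4    # e.g. move-forward, turn-left, turn-right, stop


class BlindAgent:
    """Sketch of a blind agent: fully-connected encoder -> GRU memory -> policy.

    Mirrors the navigation-agnostic components named in the abstract;
    weights are random placeholders, not trained values.
    """

    def __init__(self):
        d_in = OBS_DIM + GOAL_DIM
        self.W_enc = rng.normal(0, 0.1, (HIDDEN, d_in))
        # Standard GRU parameters: update gate z, reset gate r, candidate state.
        self.W_z = rng.normal(0, 0.1, (HIDDEN, 2 * HIDDEN))
        self.W_r = rng.normal(0, 0.1, (HIDDEN, 2 * HIDDEN))
        self.W_h = rng.normal(0, 0.1, (HIDDEN, 2 * HIDDEN))
        self.W_pi = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))

    def step(self, h, egomotion, goal):
        """One timestep: encode observation, update memory, pick an action."""
        x = np.tanh(self.W_enc @ np.concatenate([egomotion, goal]))
        xh = np.concatenate([x, h])
        z = 1.0 / (1.0 + np.exp(-(self.W_z @ xh)))   # update gate
        r = 1.0 / (1.0 + np.exp(-(self.W_r @ xh)))   # reset gate
        h_cand = np.tanh(self.W_h @ np.concatenate([x, r * h]))
        h_new = (1.0 - z) * h + z * h_cand           # recurrent memory update
        logits = self.W_pi @ h_new                   # action scores (greedy here)
        return h_new, int(np.argmax(logits))


# Roll the agent forward a few steps with placeholder observations.
agent = BlindAgent()
h = np.zeros(HIDDEN)
goal = np.array([3.0, -1.0])
for t in range(5):
    egomotion = np.zeros(OBS_DIM)  # placeholder sensor reading
    h, action = agent.step(h, egomotion, goal)
```

The key point the sketch makes concrete is that the hidden state `h` is the agent's only memory: any map-like representation the trained agent builds must live in this vector, which is what the paper probes.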

AUTHORS

Dhruv Batra

Ari Morcos

Manolis Savva

Erik Wijmans

Irfan Essa

Stefan Lee

Publisher

ICLR

Research Topics

Robotics

Related Publications

April 02, 2024

ROBOTICS

REINFORCEMENT LEARNING

MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation

Patrick Lancaster, Nicklas Hansen, Aravind Rajeswaran, Vikash Kumar

March 26, 2024

ROBOTICS

REINFORCEMENT LEARNING

When should we prefer Decision Transformers for Offline Reinforcement Learning?

Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang

March 12, 2024

ROBOTICS

Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots

Xavi Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Jimmy Yang, Ruslan Partsey, Ruta Desai, Alexander William Clegg, Tiffany Min, Vladimír Vondruš, Theo Gervet, Vincent-Pierre Berges, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, Roozbeh Mottaghi

December 10, 2023

ROBOTICS

REINFORCEMENT LEARNING

Accelerating Exploration with Unlabeled Prior Data

Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, Sergey Levine
