REINFORCEMENT LEARNING

NovelD: A Simple yet Effective Exploration Criterion

November 01, 2021

Abstract

Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. Previous exploration methods (e.g., RND) have achieved strong results on multiple hard tasks. However, when there are several novel areas to explore, these methods often focus quickly on one without sufficiently trying the others (akin to depth-first search). In some scenarios (e.g., the four-corridor environment in Sec. 4.2), we observe that they explore a single corridor for a long time and fail to cover all the states. In contrast, in theoretical RL, with optimistic initialization and a bonus proportional to the inverse square root of the visitation count, the agent does not suffer from this issue and explores different novel regions in alternation (akin to breadth-first search). Inspired by this, we propose a simple yet effective criterion called NovelD that weights every novel area approximately equally. Our algorithm is very simple, yet it matches or even outperforms multiple state-of-the-art exploration methods on many hard-exploration tasks. Specifically, NovelD solves all the static procedurally-generated tasks in MiniGrid within just 120M environment steps, without any curriculum learning; in comparison, the previous state of the art solves only 50% of them. NovelD also achieves state-of-the-art results on multiple tasks in NetHack, a rogue-like game with even more challenging procedurally-generated environments. On multiple Atari games (e.g., Montezuma's Revenge, Venture, Gravitar), NovelD outperforms RND. We analyze NovelD thoroughly in MiniGrid and find that, empirically, it helps the agent explore the environment more uniformly, with a focus on exploring beyond the boundary of the explored region.
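The abstract describes NovelD only at a high level (a criterion that weights every novel area approximately equally), so the sketch below is illustrative rather than the paper's exact formulation. It assumes: (i) RND prediction error as the novelty measure, (ii) the classic inverse-square-root count bonus mentioned from theoretical RL, and (iii) a bonus given by the positive part of the change in novelty between consecutive states, gated by an episodic first-visit indicator so no single novel region dominates. All class, function, and parameter names (RNDNovelty, count_bonus, noveld_style_bonus, alpha) are placeholders, not the paper's API.

```python
# Illustrative sketch only; see the assumptions stated above.
import torch
import torch.nn as nn


class RNDNovelty(nn.Module):
    """Random Network Distillation: novelty = prediction error of a trained
    predictor against a fixed, randomly initialized target network."""

    def __init__(self, obs_dim: int, hidden: int = 128):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden))
        for p in self.target.parameters():  # the target network stays fixed
            p.requires_grad_(False)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Per-state novelty: squared prediction error (higher = less familiar).
        return (self.predictor(obs) - self.target(obs)).pow(2).mean(dim=-1)


def count_bonus(visit_count: int) -> float:
    """Theoretical-RL style bonus: inverse square root of the visitation count."""
    return 1.0 / (visit_count ** 0.5)


def noveld_style_bonus(novelty_next: torch.Tensor,
                       novelty_curr: torch.Tensor,
                       episodic_count_next: int,
                       alpha: float = 0.5) -> torch.Tensor:
    """Assumed form of the criterion: reward the *increase* in novelty when
    crossing from a familiar state into a less familiar one, and only on the
    first visit to that state within the episode."""
    boundary_term = torch.clamp(novelty_next - alpha * novelty_curr, min=0.0)
    first_visit = 1.0 if episodic_count_next == 1 else 0.0
    return boundary_term * first_visit
```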


AUTHORS

Tianjun Zhang

Huazhe Xu

Xiaolong Wang

Yi Wu

Kurt Keutzer

Joseph E. Gonzalez

Yuandong Tian

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Related Publications

May 06, 2024

REINFORCEMENT LEARNING

COMPUTER VISION

Solving General Noisy Inverse Problem via Posterior Sampling: A Policy Gradient Viewpoint

Haoyue Tang, Tian Xie

April 30, 2024

REINFORCEMENT LEARNING

Multi-Agent Diagnostics for Robustness via Illuminated Diversity

Mikayel Samvelyan, Minqi Jiang, Davide Paglieri, Jack Parker-Holder, Tim Rocktäschel

April 02, 2024

ROBOTICS

REINFORCEMENT LEARNING

MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation

Patrick Lancaster, Nicklas Hansen, Aravind Rajeswaran, Vikash Kumar

March 26, 2024

ROBOTICS

REINFORCEMENT LEARNING

When should we prefer Decision Transformers for Offline Reinforcement Learning?

Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang
