REINFORCEMENT LEARNING

MADE: Exploration via Maximizing Deviation from Explored Regions

November 01, 2021

Abstract

In online reinforcement learning (RL), efficient exploration remains particularly challenging in high-dimensional environments with sparse rewards. In low-dimensional environments, where tabular parameterization is possible, count-based upper confidence bound (UCB) exploration methods achieve minimax near-optimal rates. However, it remains unclear how to efficiently implement UCB in realistic RL tasks that involve nonlinear function approximation. To address this, we propose a new exploration approach via maximizing the deviation of the occupancy of the next policy from the explored regions. We add this term as an adaptive regularizer to the standard RL objective to trade off between exploration and exploitation. We pair the new objective with a provably convergent algorithm, giving rise to a new intrinsic reward that adjusts existing bonuses. The proposed intrinsic reward is easy to implement and to combine with existing RL algorithms for exploration. As a proof of concept, we evaluate the new intrinsic reward on tabular examples across a variety of model-based and model-free algorithms, showing improvements over count-only exploration strategies. When tested on navigation and locomotion tasks from MiniGrid and DeepMind Control Suite benchmarks, our approach significantly improves sample efficiency over state-of-the-art methods.
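In a tabular setting, an intrinsic reward of this kind can be dropped into an existing agent by simply adding it to the environment reward before a standard update. The sketch below is a minimal illustration of that wiring, assuming a count-based bonus that is further discounted where the estimated policy occupancy is already high; the `visit_counts` and `occupancy_estimate` tables, the `beta` coefficient, and the specific bonus form are illustrative assumptions rather than the exact reward derived in the paper.

```python
import math
from collections import defaultdict

# Illustrative sketch only: a count-based bonus, shrunk where the estimated
# occupancy of the current policy is already high, added to the extrinsic
# reward. Names and constants are assumptions, not the paper's exact formula.
visit_counts = defaultdict(int)          # N(s, a): visits to each state-action pair
occupancy_estimate = defaultdict(float)  # rough occupancy of the current policy over (s, a)
beta = 0.1                               # exploration coefficient (hypothetical value)

def intrinsic_reward(state, action):
    """Bonus that decays with both the visit count and the estimated occupancy."""
    n = visit_counts[(state, action)] + 1
    rho = max(occupancy_estimate[(state, action)], 1e-3)  # floor to avoid blow-up
    return beta / math.sqrt(n * rho)

def shaped_reward(state, action, extrinsic_reward):
    """Reward fed to the underlying (tabular or deep) RL algorithm."""
    visit_counts[(state, action)] += 1
    return extrinsic_reward + intrinsic_reward(state, action)
```

Any value-based or policy-gradient learner can then consume `shaped_reward` in place of the raw environment reward, with the occupancy estimate updated separately as the policy changes.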

AUTHORS

Tianjun Zhang

Paria Rashidinejad

Jiantao Jiao

Yuandong Tian

Joseph E. Gonzalez

Stuart Russell

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Related Publications

January 06, 2024

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Learning to bid and rank together in recommendation systems

Geng Ji, Wentao Jiang, Jiang Li, Fahmid Morshed Fahid, Zhengxing Chen, Yinghua Li, Jun Xiao, Chongxi Bao, Zheqing (Bill) Zhu

December 11, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

TaskMet: Task-driven Metric Learning for Model Learning

Dishank Bansal, Ricky Chen, Mustafa Mukadam, Brandon Amos

October 01, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

Q-Pensieve: Boosting Sample Efficiency of Multi-Objective RL Through Memory Sharing of Q-Snapshots

Wei Hung, Bo-Kai Huang, Ping-Chun Hsieh, Xi Liu

September 12, 2023

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Optimizing Long-term Value for Auction-Based Recommender Systems via On-Policy Reinforcement Learning

Bill Zhu, Alex Nikulkov, Dmytro Korenkevych, Fan Liu, Jalaj Bhandari, Ruiyang Xu, Urun Dogan
