REINFORCEMENT LEARNING

On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning

March 01, 2021

Abstract

Model-based Reinforcement Learning (MBRL) is a promising framework for learning control in a data-efficient manner. MBRL algorithms can be fairly complex due to the separate dynamics modeling and the subsequent planning algorithm, and as a result they often possess tens of hyperparameters and architectural choices. For this reason, MBRL typically requires significant human expertise before it can be applied to new problems and domains. To alleviate this problem, we propose to use automatic hyperparameter optimization (HPO). We demonstrate that automated HPO can tackle this problem effectively, yielding significantly improved performance compared to human experts. In addition, we show that tuning several MBRL hyperparameters dynamically, i.e., during training itself, further improves performance compared to using static hyperparameters that are kept fixed for the whole training. Finally, our experiments provide valuable insights into the effects of several hyperparameters, such as the planning horizon or the learning rate, and their influence on the stability of training and the resulting rewards.
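The abstract describes tuning MBRL hyperparameters automatically rather than by hand. As a rough illustration of the static variant of this idea, the sketch below runs a simple random search over a few of the hyperparameters named in the abstract; the search ranges, function names, and synthetic scoring function are all assumptions for illustration, not the paper's actual experimental setup.

```python
import math
import random

def sample_config(rng: random.Random) -> dict:
    """Sample one configuration from an illustrative MBRL search space.

    The hyperparameter names and ranges here are assumptions, not the
    paper's actual search space.
    """
    return {
        "learning_rate": 10 ** rng.uniform(-5, -2),  # log-uniform sampling
        "plan_horizon": rng.randint(5, 50),          # planner lookahead steps
        "hidden_size": rng.choice([128, 256, 512]),  # dynamics-model width
    }

def train_and_evaluate(config: dict) -> float:
    """Stand-in for a full MBRL run (dynamics-model training + planning).

    A real implementation would train the dynamics model, run the planner,
    and return the average episode return; this synthetic score just keeps
    the sketch runnable end to end.
    """
    lr_penalty = abs(math.log10(config["learning_rate"]) + 3.5)
    horizon_penalty = abs(config["plan_horizon"] - 25) / 25
    return 100.0 - 20.0 * lr_penalty - 30.0 * horizon_penalty

def random_search(n_trials: int = 20, seed: int = 0) -> tuple[dict, float]:
    """Minimal static HPO: sample configurations, keep the best one."""
    rng = random.Random(seed)
    best_config, best_score = {}, float("-inf")
    for _ in range(n_trials):
        config = sample_config(rng)
        score = train_and_evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print(f"best config: {config}, score: {score:.2f}")
```

The dynamic tuning the abstract contrasts this with would instead adjust hyperparameters during training itself (for example, population-based approaches), rather than committing to one fixed configuration per run.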

AUTHORS

Baohe Zhang

Raghu Rajan

Luis Pineda

Nathan Lambert

André Biedenkapp

Kurtland Chua

Frank Hutter

Roberto Calandra

Publisher

AISTATS

Research Topics

Reinforcement Learning

Related Publications

December 12, 2024

REINFORCEMENT LEARNING

Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models

Andrea Tirinzoni, Ahmed Touati, Jesse Farebrother, Mateusz Guzek, Anssi Kanervisto, Yingchen Xu, Alessandro Lazaric, Matteo Pirotta

August 16, 2024

THEORY

REINFORCEMENT LEARNING

Dual Approximation Policy Optimization

Zhihan Xiong, Maryam Fazel, Lin Xiao

July 01, 2024

REINFORCEMENT LEARNING

Behaviour Distillation

Andrei Lupu, Chris Lu, Robert Lange, Jakob Foerster

May 06, 2024

REINFORCEMENT LEARNING

COMPUTER VISION

Solving General Noisy Inverse Problem via Posterior Sampling: A Policy Gradient Viewpoint

Haoyue Tang, Tian Xie
