REINFORCEMENT LEARNING

CORE MACHINE LEARNING

Q-Pensieve: Boosting Sample Efficiency of Multi-Objective RL Through Memory Sharing of Q-Snapshots

October 01, 2023

Abstract

Many real-world continuous control problems face the dilemma of weighing the pros and cons of multiple objectives, and multi-objective reinforcement learning (MORL) serves as a generic framework for learning control policies under different preferences over objectives. However, existing MORL methods either rely on multiple passes of explicit search to find the Pareto front, and are therefore not sample-efficient, or use a shared policy network that allows only coarse knowledge sharing among policies. To boost the sample efficiency of MORL, we propose Q-Pensieve, a policy improvement scheme that stores a collection of Q-snapshots to jointly determine the policy update direction, thereby enabling data sharing at the policy level. We show that Q-Pensieve can be naturally integrated with soft policy iteration with a convergence guarantee. To substantiate this concept, we propose the technique of the Q replay buffer, which stores the learned Q-networks from past iterations, and arrive at a practical actor-critic implementation. Through extensive experiments and an ablation study, we demonstrate that, with far fewer samples, the proposed algorithm outperforms benchmark MORL methods on a variety of MORL benchmark tasks.
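To make the abstract's mechanism more concrete, below is a minimal, hypothetical Python sketch (not the authors' code) of what a Q replay buffer of Q-snapshots might look like. It assumes multi-objective Q-networks that output a vector of per-objective values and a preference weight used for scalarization; all names (e.g., QSnapshotBuffer, best_scalarized_value) are illustrative.

```python
import copy
from collections import deque

import torch


class QSnapshotBuffer:
    """Hypothetical sketch of a Q replay buffer that keeps frozen snapshots
    of past Q-networks (names and sizes are illustrative, not the paper's)."""

    def __init__(self, capacity: int = 5):
        # FIFO buffer of frozen Q-network snapshots from past iterations.
        self.snapshots = deque(maxlen=capacity)

    def add(self, q_network: torch.nn.Module) -> None:
        # Store a frozen copy of the current Q-network as a snapshot.
        snapshot = copy.deepcopy(q_network)
        for p in snapshot.parameters():
            p.requires_grad_(False)
        self.snapshots.append(snapshot)

    def best_scalarized_value(self, obs, action, weight):
        # Scalarize each snapshot's multi-objective Q-vector with the
        # preference weight and take the max across snapshots, so the
        # actor update can borrow value estimates learned earlier,
        # possibly under other preferences.
        values = [torch.matmul(q(obs, action), weight) for q in self.snapshots]
        return torch.stack(values, dim=0).max(dim=0).values


# Illustrative actor update: maximize the best scalarized Q over snapshots.
# (actor, q_buffer, obs_batch, and preference are assumed to exist elsewhere.)
# actions = actor(obs_batch, preference)
# actor_loss = -q_buffer.best_scalarized_value(obs_batch, actions, preference).mean()
```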

Download the Paper

AUTHORS

Wei Hung

Bo-Kai Huang

Ping-Chun Hsieh

Xi Liu

Publisher

ICLR

Research Topics

Reinforcement Learning

Core Machine Learning

Related Publications

September 22, 2023

COMPUTER VISION

CORE MACHINE LEARNING

Common Corruption Robustness of Point Cloud Detectors: Benchmark and Enhancement

Shuangzhi Li, Zhijie Wang, Felix Xu, Qing Guo, Xingyu Li, Lei Ma

August 24, 2023

NLP

CORE MACHINE LEARNING

Code Llama: Open Foundation Models for Code

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Ellen Tan, Yossef (Yossi) Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Defossez, Jade Copet, Faisal Azhar, Hugo Touvron, Gabriel Synnaeve, Louis Martin, Nicolas Usunier, Thomas Scialom

June 26, 2023

CORE MACHINE LEARNING

The SSL Interplay: Augmentations, Inductive Bias, and Generalization

Vivien Cabannes, Bobak Kiani, Randall Balestriero, Yann LeCun, Alberto Bietti

June 18, 2023

ROBOTICS

REINFORCEMENT LEARNING

Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

Vincent-Pierre Berges, Andrew Szot, Devendra Singh Chaplot, Aaron Gokaslan, Dhruv Batra, Eric Undersander
