Decision Transformer: Reinforcement Learning via Sequence Modeling

October 27, 2021

Abstract

We hypothesize that effective policies can be learned from offline data without dynamic programming bootstrapping. To investigate this, we replace traditional reinforcement learning (RL) algorithms, which typically bootstrap against a learned value function, with a simple sequence modeling objective. We train a transformer on sequences of returns, states, and actions using the autoregressive prediction loss widely used in language modeling, which reduces policy sampling to sequence generation. Training with a supervised loss removes the need for dynamic programming bootstrapping, which is known to be unstable with function approximation, and lets us leverage the simplicity, scalability, and long-range memory capabilities of transformers. Through experiments spanning a diverse set of offline RL benchmarks, including Atari, OpenAI Gym, and Key-to-Door, we show that our Decision Transformer model can generate diverse behaviors by conditioning on desired returns. In particular, when conditioned on high desired returns, Decision Transformer produces a policy that is competitive with or better than state-of-the-art model-free offline RL algorithms.
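
The abstract compresses the method into a few sentences; the sketch below makes the sequence-modeling view concrete. It is a minimal illustration, not the authors' released implementation: the stand-in backbone (a small `nn.TransformerEncoder` with a causal mask in place of the paper's GPT architecture), the layer sizes, and the helper names (`DecisionTransformer`, `returns_to_go`) are assumptions made for this example. Trajectories are rendered as interleaved (return-to-go, state, action) tokens, and the model is trained with a supervised loss to predict each action.

```python
# Minimal Decision Transformer sketch in PyTorch. Assumptions: continuous
# states/actions, a small nn.TransformerEncoder standing in for the paper's
# GPT backbone, and illustrative hyperparameters.
import torch
import torch.nn as nn

def returns_to_go(rewards):
    """R_t = sum of rewards from step t to the end of the trajectory.
    rewards: (batch, T) -> returns-to-go: (batch, T, 1)."""
    rtg = torch.flip(torch.cumsum(torch.flip(rewards, [1]), dim=1), [1])
    return rtg.unsqueeze(-1)

class DecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, hidden_dim=128, max_timestep=1000):
        super().__init__()
        # One linear embedding per modality, plus a learned timestep embedding.
        self.embed_return = nn.Linear(1, hidden_dim)
        self.embed_state = nn.Linear(state_dim, hidden_dim)
        self.embed_action = nn.Linear(act_dim, hidden_dim)
        self.embed_timestep = nn.Embedding(max_timestep, hidden_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, dim_feedforward=4 * hidden_dim,
            batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=3)
        self.predict_action = nn.Linear(hidden_dim, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long.
        B, T = states.shape[:2]
        t_emb = self.embed_timestep(timesteps)
        r = self.embed_return(rtg) + t_emb
        s = self.embed_state(states) + t_emb
        a = self.embed_action(actions) + t_emb
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack([r, s, a], dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token attends only to itself and earlier tokens.
        mask = torch.triu(
            torch.full((3 * T, 3 * T), float("-inf"), device=states.device),
            diagonal=1)
        h = self.backbone(tokens, mask=mask)
        # Predict a_t from the hidden state at each state token s_t, so the
        # action is conditioned on the desired return and the history so far.
        return self.predict_action(h[:, 1::3])

# Training reduces to supervised sequence modeling: regress predicted actions
# onto the dataset actions (or use cross-entropy for discrete actions), e.g.
#   loss = ((model(rtg, states, actions, timesteps) - actions) ** 2).mean()
```

At evaluation time, conditioning on a high desired return amounts to prompting: initialize the return-to-go with the target return, generate an action, and decrement the return-to-go by the observed reward at each step.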

AUTHORS

Lili Chen

Kevin Lu

Aravind Rajeswaran

Kimin Lee

Aditya Grover

Michael Laskin

Pieter Abbeel

Aravind Srinivas

Igor Mordatch

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Core Machine Learning
