Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning

April 24, 2023

Abstract

No-press Diplomacy is a complex strategy game involving both cooperation and competition that has served as a benchmark for multi-agent AI research. While self-play reinforcement learning has resulted in numerous successes in purely adversarial games like chess, Go, and poker, self-play alone is insufficient for achieving optimal performance in domains involving cooperation with humans. We address this shortcoming by first introducing a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy. We prove that this is a no-regret learning algorithm under a modified utility function. We then show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL that provides a model of human play while simultaneously training an agent that responds well to this human model. We used RL-DiL-piKL to train an agent we name Diplodocus. In a 200-game no-press Diplomacy tournament involving 62 human participants spanning skill levels from beginner to expert, two Diplodocus agents both achieved a higher average score than all other participants who played more than two games, and ranked first and third according to an Elo ratings model.
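The core idea of regularizing a reward-maximizing policy toward a human imitation policy can be illustrated with the standard closed form for a one-step KL-regularized best response: maximizing E_pi[Q] - lambda * KL(pi || tau) yields pi(a) proportional to tau(a) * exp(Q(a) / lambda). The sketch below is a minimal illustration of that closed form only, not the paper's full DiL-piKL algorithm (which additionally samples the regularization strength lambda from a distribution and runs no-regret updates during planning); the function name and interface are hypothetical.

```python
import math

def kl_regularized_policy(q_values, anchor_policy, lam):
    """Illustrative one-step KL-regularized best response.

    Maximizes E_pi[Q] - lam * KL(pi || anchor), whose closed form is
    pi(a) proportional to anchor(a) * exp(Q(a) / lam).
    Large lam keeps the policy close to the human anchor;
    small lam acts nearly greedily on the action values Q.
    """
    # Work in log space for numerical stability.
    logits = [math.log(p) + q / lam for p, q in zip(anchor_policy, q_values)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

With a small lambda the resulting policy concentrates on the highest-value action; with a large lambda it stays near the anchor distribution, which is the trade-off the abstract describes between reward maximization and human compatibility.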

AUTHORS

Athul Paul Jacob

Alexander Miller

Adam Lerer

David Wu

Gabriele Farina

Jonathan Gray

Noam Brown

Publisher

ICLR

Research Topics

Reinforcement Learning

Related Publications

December 26, 2025

REINFORCEMENT LEARNING

NLP

Safety Alignment of LMs via Non-cooperative Games

Brandon Amos, Anselm Paulus, Arman Zharmagambetov, Ilia Kulikov, Ivan Evtimov, Kamalika Chaudhuri, Remi Munos

December 01, 2025

CONVERSATIONAL AI

REINFORCEMENT LEARNING

Rubric-Based Benchmarking and Reinforcement Learning for Advancing LLM Instruction Following

Amine Benhalloum, Hany Awadalla, Hejia Zhang, Hunter Lang, Julian Katz-Samuels, Karishma Mandyam, Licheng Yu, Manaal Faruqui, Maryam Fazel-Zarandi, Nanshu Wang, Qi Qi, Richard Yuanzhe Pang, Selina Xiaoliang Peng, Shengjie Bi, Shengyu Feng, Shishir G. Patil, Sopan Khosla, Sujan Gonugondla, Vincent Li, Wenzhe Li, Yuanhao Xiong, Yue Yu, Yun He, Yundi Qian

October 13, 2025

REINFORCEMENT LEARNING

RESEARCH

SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models

Paria Rashidinejad, Cai Zhou, Tommi Jaakkola, DiJia Su, Bo Liu, Feiyu Chen, Chenyu Wang, Shannon Zejiang Shen, Sid Wang, Siyan Zhao, Song Jiang, Yuandong Tian

September 24, 2025

CONVERSATIONAL AI

REINFORCEMENT LEARNING

Compute as Teacher: Turning Inference Compute Into Reference-Free Supervision

Dulhan Jayalath, Suchin Gururangan, Cheng Zhang, Alan Schelten, Anirudh Goyal, Parag Jain, Shashwat Goel, Thomas Simon Foster
