REINFORCEMENT LEARNING

PIECEWISE LINEAR PARAMETRIZATION OF POLICIES: TOWARDS INTERPRETABLE DEEP REINFORCEMENT LEARNING

March 11, 2024

Abstract

Learning inherently interpretable policies is a central challenge in the path to developing autonomous agents that humans can trust. Linear policies can justify their decisions while interacting in a dynamic environment, but their reduced expressivity prevents them from solving hard tasks. Instead, we argue for the use of piecewise-linear policies. We carefully study to what extent they can retain the interpretable properties of linear policies while reaching competitive performance with neural baselines. In particular, we propose the HyperCombinator (HC), a piecewise-linear neural architecture expressing a policy with a controllably small number of sub-policies. Each sub-policy is linear with respect to interpretable features, shedding light on the decision process of the agent without requiring an additional explanation model. We evaluate HC policies in control and navigation experiments, visualize the improved interpretability of the agent and highlight its trade-off with performance. Moreover, we validate that the restricted model class that the HyperCombinator belongs to is compatible with the algorithmic constraints of various reinforcement learning algorithms.
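
The abstract describes a gating mechanism over a small set of linear sub-policies. The sketch below illustrates one way such a piecewise-linear policy could be wired in PyTorch: a small gating network hard-selects one of K linear sub-policies, and the selected sub-policy maps interpretable features to an action through a single linear map whose coefficients can be inspected directly. All class names, shapes, and the argmax gating are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a piecewise-linear policy in the
# spirit of the HyperCombinator: a gating network assigns each state to one of
# K linear sub-policies, and the selected sub-policy maps interpretable
# features to an action with a single linear map. Names, shapes, and the hard
# argmax gating are illustrative assumptions; training would typically need a
# differentiable relaxation (e.g. straight-through or Gumbel-softmax) instead.
import torch
import torch.nn as nn


class PiecewiseLinearPolicy(nn.Module):
    def __init__(self, feature_dim: int, action_dim: int, num_subpolicies: int = 4):
        super().__init__()
        # One linear sub-policy per piece: weights (K, A, F) and biases (K, A).
        self.weights = nn.Parameter(0.01 * torch.randn(num_subpolicies, action_dim, feature_dim))
        self.biases = nn.Parameter(torch.zeros(num_subpolicies, action_dim))
        # Small gating network scoring the K sub-policies for a given state.
        self.gate = nn.Sequential(
            nn.Linear(feature_dim, 32), nn.ReLU(), nn.Linear(32, num_subpolicies)
        )

    def forward(self, features: torch.Tensor):
        # features: (batch, feature_dim) interpretable state features.
        scores = self.gate(features)              # (batch, K)
        piece = scores.argmax(dim=-1)             # hard selection of one sub-policy
        w = self.weights[piece]                   # (batch, A, F)
        b = self.biases[piece]                    # (batch, A)
        # The action is linear in the features for the selected piece, so each
        # decision is explained by a single, inspectable weight matrix.
        action = torch.bmm(w, features.unsqueeze(-1)).squeeze(-1) + b
        return action, piece


# Usage: which sub-policy fired, and which coefficients produced the action.
policy = PiecewiseLinearPolicy(feature_dim=8, action_dim=2)
obs = torch.randn(5, 8)
action, piece = policy(obs)
print(action.shape, piece.tolist())       # torch.Size([5, 2]) and piece indices
print(policy.weights[piece[0]])           # linear coefficients behind the first decision
```

Because the number of pieces is kept controllably small, each decision can be traced back to one weight matrix over interpretable features, which is the property the abstract emphasizes.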

Written by

Maxime Wabartha

Joelle Pineau

Publisher

ICLR or SATML

Research Topics

Reinforcement Learning

Related Publications

August 16, 2024

THEORY

REINFORCEMENT LEARNING

Dual Approximation Policy Optimization

Zhihan Xiong, Maryam Fazel, Lin Xiao

July 01, 2024

REINFORCEMENT LEARNING

Behaviour Distillation

Andrei Lupu, Chris Lu, Robert Lange, Jakob Foerster

May 06, 2024

REINFORCEMENT LEARNING

COMPUTER VISION

Solving General Noisy Inverse Problem via Posterior Sampling: A Policy Gradient Viewpoint

Haoyue Tang, Tian Xie

April 30, 2024

REINFORCEMENT LEARNING

Multi-Agent Diagnostics for Robustness via Illuminated Diversity

Mikayel Samvelyan, Minqi Jiang, Davide Paglieri, Jack Parker-Holder, Tim Rocktäschel
