The Cringe Loss: Learning what language not to model

August 06, 2023

Abstract

Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data: examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three experiments, on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines, and the method is conceptually simple and easy to train and implement.
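To make the mechanism concrete, below is a minimal PyTorch sketch of the contrastive term applied to a negative sequence, under the recipe the abstract describes: each token of a negative example is contrasted against a "positive" token drawn from the model's own top-k predictions at that position, so the negative token's score is pushed down relative to something the model already ranks highly. The function name cringe_loss, the tensor shapes, the choice to sample the positive proportionally to the model's renormalized top-k probabilities, and the plain averaging over positions are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def cringe_loss(logits, neg_targets, k=5):
    """Sketch of a token-level contrastive loss on a negative sequence.

    logits:      (batch, seq_len, vocab) model scores at each position
    neg_targets: (batch, seq_len) token ids of the negative example
    k:           size of the top-k pool the positive token is drawn from
    """
    # Score the model assigns to each token of the negative sequence.
    neg_scores = logits.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1)

    # Sample a "positive" token from the model's own top-k predictions at
    # each position, proportionally to the model's probabilities
    # renormalized over the top-k. (Whether the negative token itself is
    # excluded from this pool is a detail left to the paper.)
    topk_scores, _ = logits.topk(k, dim=-1)                 # (B, T, k)
    topk_probs = F.softmax(topk_scores, dim=-1)
    idx = torch.multinomial(topk_probs.flatten(0, 1), 1)    # (B*T, 1)
    idx = idx.view(*neg_targets.shape, 1)
    pos_scores = topk_scores.gather(-1, idx).squeeze(-1)    # (B, T)

    # Pairwise contrast: push the sampled positive's score above the
    # negative token's score via a two-way softmax cross-entropy.
    pair = torch.stack([pos_scores, neg_scores], dim=-1)    # (B, T, 2)
    return -F.log_softmax(pair, dim=-1)[..., 0].mean()
```

In full training this term would apply only to sequences labeled as negative and be added, possibly with a weighting coefficient, to the standard cross-entropy loss on positive data; the process then iterates, as the "Iterative" in CRINGE suggests: generate from the model, label the generations, and fold the newly labeled negatives back into training.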

AUTHORS

Leonard Adolphs

Tianyu Gao

Jing Xu

Kurt Shuster

Sainbayar Sukhbaatar

Jason Weston

Publisher

ACL

Research Topics

Conversational AI

Reinforcement Learning

Core Machine Learning

Related Publications

July 23, 2024

HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

The Llama 3 Herd of Models

Llama team

July 01, 2024

REINFORCEMENT LEARNING

Behaviour Distillation

Andrei Lupu, Chris Lu, Robert Lange, Jakob Foerster

May 06, 2024

CONVERSATIONAL AI

NLP

GAIA: a benchmark for general AI assistants

Gregoire Mialon, Yann LeCun, Thomas Scialom, Clémentine Fourrier, Thomas Wolf

May 06, 2024

REINFORCEMENT LEARNING

COMPUTER VISION

Solving General Noisy Inverse Problem via Posterior Sampling: A Policy Gradient Viewpoint

Haoyue Tang, Tian Xie
