August 06, 2023
Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data – examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines while remaining conceptually simple and easy to train and implement.
Publisher
ACL
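The abstract does not spell out the loss itself, but the core contrastive idea can be sketched: for each token of a negative example, the model's score for that token is pushed below the score of a "positive" token drawn from the model's own top predictions. The sketch below is a minimal, hypothetical reading of that idea in plain NumPy; the function name, the top-k sampling of the positive token, and the two-way softmax form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def cringe_token_loss(logits, neg_token, k=5, rng=None):
    """Sketch of a contrastive loss for one token of a negative example.

    The negative token's score is contrasted against a "positive"
    token sampled from the model's own top-k predictions (excluding
    the negative token itself). All details here are illustrative
    assumptions, not the paper's exact recipe.
    """
    rng = rng or np.random.default_rng(0)
    # Rank vocabulary entries by model score, highest first.
    order = np.argsort(logits)[::-1]
    # Top-k candidate positives, excluding the negative token.
    candidates = [t for t in order[: k + 1] if t != neg_token][:k]
    # Sample one candidate in proportion to its softmax probability.
    cand_logits = logits[candidates]
    probs = np.exp(cand_logits - cand_logits.max())
    probs /= probs.sum()
    pos_token = rng.choice(candidates, p=probs)
    # Two-way softmax (binary contrastive) loss: drive the sampled
    # positive token's score above the negative token's score.
    s_pos, s_neg = logits[pos_token], logits[neg_token]
    return float(np.log1p(np.exp(s_neg - s_pos)))
```

The "iterative" part of the name suggests the procedure is repeated: the model's own flagged generations can be fed back as fresh negative examples, so lowering the negative token's logit should lower this loss.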