CONVERSATIONAL AI

RESEARCH

What makes a good conversation? How controllable attributes affect human judgments

May 29, 2019

Abstract

A good conversation requires balance – between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments.
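To make the two control methods named in the abstract concrete, below is a minimal Python sketch, not the authors' implementation: weighted decoding adds weighted feature scores to the decoder's word log-probabilities at each generation step, while conditional training prepends discrete control tokens to the model input so it learns to condition on a requested attribute level. The function names, the toy repetition feature, and the `<attribute:level>` token format are illustrative assumptions.

```python
# Minimal sketch of the two control methods (hypothetical stand-ins,
# not the paper's code): weighted decoding and conditional training.
import math
from typing import Callable, Dict, List

def weighted_decoding_step(
    model_logprobs: Dict[str, float],   # log P(word | context) from a decoder (toy values here)
    features: Dict[str, Callable[[str, List[str]], float]],  # one feature fn per controlled attribute
    weights: Dict[str, float],          # attribute weights chosen by the experimenter
    partial_response: List[str],
) -> str:
    """Pick the next word by adding weighted feature scores to the model's log-probabilities."""
    def score(word: str) -> float:
        s = model_logprobs[word]
        for name, feat in features.items():
            s += weights.get(name, 0.0) * feat(word, partial_response)
        return s
    return max(model_logprobs, key=score)

# Example feature: discourage repetition by penalising words already produced.
def repetition_feature(word: str, partial_response: List[str]) -> float:
    return -1.0 if word in partial_response else 0.0

# Conditional training instead prepends discrete control tokens to the input,
# so the model learns at training time to produce responses matching them.
def add_control_tokens(context: str, controls: Dict[str, int]) -> str:
    tokens = " ".join(f"<{attr}:{level}>" for attr, level in controls.items())
    return f"{tokens} {context}"

if __name__ == "__main__":
    logprobs = {"great": math.log(0.5), "really": math.log(0.3), "hiking": math.log(0.2)}
    next_word = weighted_decoding_step(
        logprobs,
        features={"repetition": repetition_feature},
        weights={"repetition": 5.0},
        partial_response=["really", "really"],
    )
    print(next_word)  # "great": the repetition penalty outweighs the model's preference
    print(add_control_tokens("i love hiking !", {"question": 1, "specificity": 7}))
```

In this toy example, a sufficiently large repetition weight overrides the model's preference for an already-used word; varying such weights (or control-token levels) per attribute is the kind of control knob whose effect on human judgments the paper studies.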

AUTHORS

Douwe Kiela

Abi See

Jason Weston

Stephen Roller

Publisher

NAACL

Related Publications

July 23, 2024

HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

The Llama 3 Herd of Models

Llama team

May 06, 2024

CONVERSATIONAL AI

NLP

GAIA: a benchmark for general AI assistants

Gregoire Mialon, Yann LeCun, Thomas Scialom, Clémentine Fourrier, Thomas Wolf

April 23, 2024

CONVERSATIONAL AI

GRAPHICS

Generating Illustrated Instructions

Sachit Menon, Ishan Misra, Rohit Girdhar

April 05, 2024

CONVERSATIONAL AI

NLP

MART: Improving LLM Safety with Multi-round Automatic Red-Teaming

Suyu Ge, Chunting Zhou, Rui Hou, Madian Khabsa, Yi-Chia Wang, Qifan Wang, Jiawei Han, Yuning Mao
