HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

Collaborative Reasoner: Self-improving Social Agents with Synthetic Conversations

April 17, 2025

Abstract

With increasingly powerful large language models (LLMs) and LLM-based agents tackling an ever-growing list of tasks, we envision a future where numerous LLM agents work seamlessly with other AI agents or humans, facilitating everyday life in myriad ways, from problem-solving to planning, knowledge gathering, and learning. To navigate varied scenarios and social contexts, such LLM agents need collaborative skills such as effective communication and theory-of-mind, yet these skills are largely ignored by the predominant single-turn evaluation of LLMs. Moreover, improving the collaborative skills of these social agents requires large amounts of conversational data, which are expensive and difficult to control, and thus hard to collect. To bridge the gap between the problem-solving and social collaboration skills of LLMs, we present Collaborative Reasoner (Coral), a framework to evaluate and improve the collaborative reasoning skills of language models. In particular, the tasks and metrics in Coral require agents to disagree with incorrect solutions, convince their partner of a correct solution, and ultimately agree as a team to commit to a final solution. Evaluating Coral on five collaborative reasoning tasks, we show that current models cannot consistently use collaboration to achieve better task performance, and that the social behaviors instilled by the current post-training process make them less desirable under collaborative scenarios. To improve the collaborative reasoning capabilities of LLMs, we propose a self-improvement approach using synthetic interaction data; to generate synthetic conversational data at scale, we build Matrix, a scalable and robust multi-agent communication framework. We leverage Coral and Matrix to synthesize supervised- and preference-finetuning data from the conversation turns in which an agent convinces its partner of the correct solution. Our self-improvement approach is effective on general, math, scientific, and social reasoning tasks, yielding improvements of up to 29.4% over the chain-of-thought performance of an equivalent single-agent LLM. We release code for Coral and Matrix for future research on collaborative social agents.
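To make the data-synthesis idea concrete, below is a minimal sketch of the two-agent loop the abstract describes: agents converse until they commit to the same answer, and turns that flip the partner from an incorrect solution to the correct one are mined as training targets. This is an illustration under assumptions, not the paper's actual API; `agent_reply`, `Turn`, and the persuasion check are hypothetical stand-ins for Coral's real protocol and metrics.

```python
# Illustrative sketch only: agent_reply() is a hypothetical placeholder for
# an LLM chat call, and the persuasion/agreement checks are simplified
# stand-ins for the metrics defined in the paper.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    text: str
    answer: str  # the solution this speaker currently commits to

def agent_reply(speaker: str, history: list[Turn], question: str) -> Turn:
    """Hypothetical LLM call: reads the conversation so far and replies,
    either defending the speaker's answer or conceding to the partner's."""
    raise NotImplementedError("plug in an LLM client here")

def collaborate(question: str, gold: str, max_turns: int = 8):
    """Run a two-agent conversation until both agents commit to the same
    answer, mining training data from persuasive turns."""
    history: list[Turn] = []
    last = {"agent_a": None, "agent_b": None}
    sft_turns: list[Turn] = []  # persuasive turns, reusable as SFT targets
    for t in range(max_turns):
        speaker = "agent_a" if t % 2 == 0 else "agent_b"
        turn = agent_reply(speaker, history, question)
        # If this speaker just flipped from a wrong answer to the correct
        # one, credit the partner's preceding turn as persuasive. A
        # preference pair would contrast that turn (chosen) with a sampled
        # alternative that failed to persuade at the same point (rejected).
        if last[speaker] not in (None, gold) and turn.answer == gold and history:
            sft_turns.append(history[-1])
        history.append(turn)
        last[speaker] = turn.answer
        # Team agreement: stop once both agents commit to the same answer.
        if last["agent_a"] is not None and last["agent_a"] == last["agent_b"]:
            break
    solved = last["agent_a"] == last["agent_b"] == gold
    return solved, sft_turns
```

In this framing, task success requires more than a correct final answer: the team must also converge on it together, which is what distinguishes collaborative reasoning from single-agent chain-of-thought.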


AUTHORS

Ansong Ni

Ruta Desai

Yang Li

Xinjie Lei

Dong Wang

Ramya Raghavendra

Gargi Ghosh

Daniel Li (FAIR)

Asli Celikyilmaz

Publisher

arXiv

Related Publications

May 14, 2025

HUMAN & MACHINE INTELLIGENCE

SPEECH & AUDIO

Emergence of Language in the Developing Brain

Linnea Evanson, Christine Bulteau, Mathilde Chipaux, Georg Dorfmüller, Sarah Ferrand-Sorbets, Emmanuel Raffo, Sarah Rosenberg, Pierre Bourdillon, Jean-Rémi King

May 13, 2025

HUMAN & MACHINE INTELLIGENCE

RESEARCH

Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI

Marlène Careil, Yohann Benchetrit, Jean-Rémi King

December 12, 2024

HUMAN & MACHINE INTELLIGENCE

NLP

Explore Theory-of-Mind: Program-Guided Adversarial Data Generation for Theory of Mind Reasoning

Melanie Sclar, Jane Yu, Maryam Fazel-Zarandi, Yulia Tsvetkov, Yonatan Bisk, Yejin Choi, Asli Celikyilmaz

November 20, 2024

CONVERSATIONAL AI

COMPUTER VISION

Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding Conversations

Jianfeng Chi, Ujjwal Karn, Hongyuan Zhan, Eric Smith, Javier Rando, Yiming Zhang, Kate Plawiak, Zacharie Delpierre Coudert, Kartikeya Upasani, Mahesh Pasupuleti
