HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

Collaborative Reasoner: Self-improving Social Agents with Synthetic Conversations

April 17, 2025

Abstract

As increasingly powerful large language models (LLMs) and LLM-based agents tackle an ever-growing range of tasks, we envision a future in which many different LLM agents work seamlessly with other AI agents or with humans, facilitating everyday life in myriad ways, from problem-solving and planning to knowledge gathering and learning. To navigate varied scenarios and social contexts, such LLM agents need collaborative skills such as effective communication and theory of mind, yet these skills are largely ignored in the predominant single-turn evaluation of LLMs. Moreover, improving the collaborative skills of these social agents requires large amounts of conversational data, which are expensive and hard to control, and therefore difficult to collect. To bridge the gap between the problem-solving and social collaboration skills of LLMs, we present Collaborative Reasoner (Coral), a framework for evaluating and improving the collaborative reasoning skills of language models. In particular, the tasks and metrics in Coral require agents to disagree with incorrect solutions, convince their partner of a correct solution, and ultimately agree as a team to commit to a final solution. Evaluating current models on five collaborative reasoning tasks in Coral, we show that they cannot consistently leverage collaboration to achieve better task performance, and that the social behaviors instilled by current post-training make them less desirable collaborators. To improve the collaborative reasoning capabilities of LLMs, we propose a self-improvement approach based on synthetic interaction data, and to generate such conversational data at scale we build Matrix, a scalable and robust multi-agent communication framework. Using Coral and Matrix, we synthesize supervised- and preference-finetuning data from the conversation turns in which an agent convinces its partner of a correct solution.
Our self-improvement approach proves effective on general, math, scientific, and social reasoning tasks, yielding improvements of up to 29.4% over the chain-of-thought performance of an equivalent single-agent LLM. We release the code for Coral and Matrix to support future research on collaborative social agents.
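The abstract describes mining preference-finetuning data from conversation turns in which one agent convinces its partner of a correct solution. Below is a minimal sketch of one way such mining could work, assuming a simple dictionary layout for synthetic conversations; the function name, fields, and filtering logic are illustrative assumptions, not the paper's actual pipeline or API.

```python
# Hypothetical sketch: build (chosen, rejected) preference pairs from
# synthetic two-agent conversations. Layout and names are assumptions.

def mine_preference_pairs(conversations, gold_answers):
    """For each conversation that committed to the correct final answer,
    pair turns that argued for the correct solution (chosen) against
    turns that argued for an incorrect one (rejected)."""
    pairs = []
    for conv in conversations:
        gold = gold_answers[conv["problem_id"]]
        # Keep only conversations where the team agreed on the right answer.
        if conv["final_answer"] != gold:
            continue
        chosen = [t for t in conv["turns"]
                  if t.get("proposed_answer") == gold]
        rejected = [t for t in conv["turns"]
                    if t.get("proposed_answer") not in (None, gold)]
        for c in chosen:
            for r in rejected:
                pairs.append({
                    "prompt": conv["problem"],
                    "chosen": c["text"],
                    "rejected": r["text"],
                })
    return pairs
```

A pair mined this way matches the prompt/chosen/rejected format that common preference-optimization trainers consume; the supervised-finetuning variant would instead keep only the persuasive correct turns as targets.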

AUTHORS

Ansong Ni

Asli Celikyilmaz

Daniel Li (FAIR)

Dong Wang

Gargi Ghosh

Ramya Raghavendra

Ruta Desai

Xinjie Lei

Yang Li

Publisher

arXiv

