April 17, 2025
With increasingly powerful large language models (LLMs) and LLM-based agents tackling an ever-growing list of tasks, we envision a future where numerous LLM agents work seamlessly with other AI agents or humans, facilitating everyday life in myriad ways, from problem-solving and planning to knowledge gathering and learning. To navigate the many scenarios that arise in different social contexts, such LLM agents need collaborative skills such as effective communication and theory-of-mind, yet these skills are often ignored by the predominant single-turn evaluation of LLMs. Moreover, improving the collaborative skills of these social agents would require large amounts of conversational data, which are expensive to collect and hard to control. To bridge the gap between the problem-solving and social collaboration skills of LLMs, we present Collaborative Reasoner (Coral), a framework to evaluate and improve the collaborative reasoning skills of language models. In particular, the tasks and metrics in Coral require agents to disagree with incorrect solutions, convince their partner of a correct solution, and ultimately agree as a team to commit to a final solution. Evaluating on five collaborative reasoning tasks in Coral, we show that current models cannot consistently leverage collaboration to achieve better task performance, and that the social behaviors instilled by current post-training processes make them less desirable in collaborative scenarios. To improve the collaborative reasoning capabilities of LLMs, we propose a self-improvement approach based on synthetic interaction data, and to generate synthetic conversational data at scale, we build Matrix, a scalable and robust multi-agent communication framework. We leverage Coral and Matrix to synthesize supervised- and preference-finetuning data from the conversation turns in which an agent convinces its partner of the correct solution.
Our self-improvement approach proves effective on general, math, scientific, and social reasoning tasks, yielding improvements of up to 29.4% over the chain-of-thought performance of an equivalent single-agent LLM. We release code for Coral and Matrix for future research on collaborative social agents.
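The interaction protocol described above (agents exchange turns, push back on incorrect answers, and only terminate once the team commits to a shared final solution) can be sketched as follows. This is a hypothetical, minimal illustration written for this post; the function and agent names are assumptions, not the released Coral or Matrix API.

```python
# Hypothetical sketch of a Coral-style two-agent collaboration loop.
# Each agent, given the question and conversation history, returns a
# (proposed_answer, agrees_with_partner) pair. The loop ends when one
# agent agrees with the answer currently on the table.

def collaborate(agent_a, agent_b, question, max_turns=6):
    """Run a two-agent conversation until both commit to one answer."""
    history = []           # list of (speaker_name, proposal, agrees) turns
    agents = [agent_a, agent_b]
    answer = None          # the answer currently on the table
    for turn in range(max_turns):
        speaker = agents[turn % 2]
        proposal, agrees = speaker(question, history)
        history.append((speaker.__name__, proposal, agrees))
        if agrees and proposal == answer:
            return proposal, history   # team commits to a final solution
        answer = proposal              # disagreement: new proposal on table
    return answer, history             # forced commit at the turn limit

# Toy agents: one starts with a wrong answer; the other disagrees and,
# over the conversation, convinces it of the correct one ("42").
def stubborn(question, history):
    if history and history[-1][1] == "42":
        return "42", True              # convinced by the partner's proposal
    return "41", False

def correct(question, history):
    agrees = bool(history) and history[-1][1] == "42"
    return "42", agrees

final, turns = collaborate(stubborn, correct, "What is 6 * 7?")
print(final)   # the committed team answer
```

In a self-improvement setup of the kind the post describes, turns like the one where `stubborn` switches to the correct answer would be harvested from the `history` as supervised- or preference-finetuning examples of successful persuasion.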
Written by
Ansong Ni
Asli Celikyilmaz
Dong Wang
Gargi Ghosh
Ramya Raghavendra
Xinjie Lei
Yang Li
Publisher
arXiv