April 17, 2025
As increasingly powerful large language models (LLMs) and LLM-based agents tackle an ever-growing list of tasks, we envision a future where numerous LLM agents work seamlessly with other AI agents or humans, facilitating everyday life in myriad ways, from problem-solving to planning, knowledge gathering, and learning. To navigate varied scenarios across different social contexts, such LLM agents need collaborative skills such as effective communication and theory of mind, yet these skills are often ignored in the predominant single-turn evaluation of LLMs. Moreover, improving the collaborative skills of these social agents requires large amounts of conversational data, which is expensive and hard to collect in a controlled way.

To bridge the gap between the problem-solving and social collaboration skills of LLMs, we present Collaborative Reasoner (Coral), a framework to evaluate and improve the collaborative reasoning skills of language models. In particular, the tasks and metrics in Coral require agents to disagree with incorrect solutions, convince their partner of a correct solution, and ultimately agree as a team on a final solution. Evaluating current models on five collaborative reasoning tasks with Coral, we show that they cannot consistently leverage collaboration to achieve better task performance, and that the social behaviors stemming from the current post-training process make them less desirable in collaborative scenarios.

To improve the collaborative reasoning capabilities of LLMs, we propose a self-improvement approach using synthetic interaction data. To facilitate synthetic conversational data generation at scale, we build Matrix, a scalable and robust multi-agent communication framework. We leverage Coral and Matrix to synthesize supervised- and preference-finetuning data from the conversation turns in which an agent convinces its partner of the correct solution.
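The disagree/convince/agree protocol described above can be sketched as a simple two-agent loop. This is not the Coral API; `agent_fn` stands in for any chat-LLM call that returns a proposed solution plus a flag indicating agreement with the partner, and `toy_agent` is a deterministic stand-in used purely for illustration.

```python
# Illustrative sketch of a disagree/convince/agree collaboration loop.
# Not the Coral implementation; all function and field names are hypothetical.

def collaborate(problem, agent_fn, max_turns=8):
    """Alternate turns between agents A and B until both commit to one answer."""
    history = [("task", problem)]
    last_solution = None
    for turn in range(max_turns):
        name = "A" if turn % 2 == 0 else "B"
        solution, agrees = agent_fn(name, history)
        history.append((name, solution))
        if agrees and solution == last_solution:
            return solution, history  # consensus: the team commits to this answer
        last_solution = solution
    return None, history  # no consensus within the turn budget

def toy_agent(name, history):
    """Deterministic stand-in: A argues for "42"; B resists once, then concedes."""
    if name == "A":
        partner_last = next((s for who, s in reversed(history) if who == "B"), None)
        return "42", partner_last == "42"
    if not any(who == "B" for who, _ in history):
        return "41", False  # B's first turn: push back with a different answer
    partner_last = next(s for who, s in reversed(history) if who == "A")
    return partner_last, True  # B is convinced and adopts A's solution
```

With these stand-ins, `collaborate("What is 6*7?", toy_agent)` reaches consensus on "42" after four turns: A proposes, B disagrees, A restates, and B concedes.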
Our self-improvement approach is shown to be effective on general, math, scientific, and social reasoning tasks, yielding improvements of up to 29.4% over the chain-of-thought performance of an equivalent single-agent LLM. We release code for Coral and Matrix to support future research on collaborative social agents.
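One way to turn such conversations into preference-finetuning data can be sketched as follows, under the assumption that several candidate replies are sampled at the same conversation state and each is annotated with the partner's answer afterward. The field names (`context`, `reply`, `partner_answer_after`) are hypothetical, not the released data schema.

```python
# Illustrative sketch of building preference pairs from logged conversations.
# Assumption: multiple candidate replies were sampled at each conversation
# state; field names are hypothetical and not the actual Coral data schema.
from collections import defaultdict

def build_preference_pairs(sampled_turns, gold):
    """Replies that flip the partner to the gold answer become 'chosen';
    replies at the same conversation state that leave the partner on a
    wrong answer become 'rejected'."""
    by_context = defaultdict(list)
    for t in sampled_turns:
        by_context[t["context"]].append(t)
    pairs = []
    for context, candidates in by_context.items():
        persuasive = [c for c in candidates if c["partner_answer_after"] == gold]
        failed = [c for c in candidates if c["partner_answer_after"] != gold]
        for good in persuasive:
            for bad in failed:
                pairs.append({"prompt": context,
                              "chosen": good["reply"],
                              "rejected": bad["reply"]})
    return pairs
```

Each resulting record has the prompt/chosen/rejected shape commonly used for preference optimization, so the persuasive turn is rewarded relative to a turn that failed to convince the partner at the same point in the conversation.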
Written by
Ansong Ni
Yang Li
Xinjie Lei
Dong Wang
Ramya Raghavendra
Gargi Ghosh
Asli Celikyilmaz
Publisher
arxiv