October 27, 2023
In the era of artificial intelligence, the role of large language models (LLMs) is becoming increasingly pivotal. Despite their widespread use, their ability to consolidate knowledge from different training documents, a crucial capability for many applications, remains unexplored. This study is the first to investigate LLMs' capability to combine such information effectively within their parameter space. To this end, we introduce EpiK-Eval, a novel question-answering benchmark designed to assess LLMs' skill in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations of multiple LLMs expose significant deficiencies in this area. We argue that these shortcomings stem from the intrinsic nature of current training objectives. Consequently, we advocate for refining the approach to knowledge consolidation, as it holds the potential to dramatically improve their overall effectiveness and performance. The findings from this study offer insights for developing more robust and reliable LLMs.
Publisher
arXiv