October 27, 2023
In the era of artificial intelligence, the role of large language models (LLMs) is becoming increasingly pivotal. Despite their widespread use, their ability to consolidate knowledge from different training documents (a crucial capability in many applications) remains unexplored. This is the first study to investigate LLMs' capability to combine such information effectively within their parameter space. To this end, we introduce EpiK-Eval, a novel question-answering benchmark designed to assess LLMs' skill in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across multiple LLMs expose significant deficiencies in this area. We argue that these shortcomings stem from the intrinsic nature of current training objectives. Consequently, we advocate for refining the approach to knowledge consolidation, as it harbors the potential to dramatically improve overall effectiveness and performance. The findings from this study offer insights for developing more robust and reliable LLMs.
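To make the evaluation setup concrete, the sketch below illustrates the kind of item the abstract implies: a story split into segments that would appear as separate training documents, paired with a question whose answer requires consolidating facts across those segments. This is a hypothetical illustration only; the class name, fields, and story content are invented for this sketch and are not EpiK-Eval's actual schema or code.

```python
# Hypothetical illustration of a "segmented narrative" evaluation item.
# Not EpiK-Eval's actual format -- just a sketch of the core idea: facts
# are scattered across separate documents, and answering the question
# requires consolidating them in the model's parametric knowledge.

from dataclasses import dataclass


@dataclass
class SegmentedNarrativeItem:
    segments: list[str]  # each segment is presented as a separate document
    question: str        # probes knowledge spanning multiple segments
    answer: str          # gold answer, recoverable only by consolidation


item = SegmentedNarrativeItem(
    segments=[
        "Part 1: On Monday, Ada adopted a cat named Turing.",
        "Part 2: On Wednesday, Ada moved from Paris to Berlin.",
        "Part 3: On Friday, Ada took her cat to the vet.",
    ],
    question="In which city did Ada take Turing to the vet?",
    answer="Berlin",  # requires combining Parts 2 and 3
)

# In an EpiK-Eval-style setup, each segment is seen during training as a
# distinct document; at test time the model answers the question without
# the segments being provided together in-context.
print(item.question, "->", item.answer)
```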
Publisher
arXiv