October 27, 2023
In the era of artificial intelligence, the role of large language models (LLMs) is becoming increasingly pivotal. Despite their widespread use, their ability to consolidate knowledge from different training documents, a crucial capability for many applications, remains unexplored. This study is the first to investigate LLMs' capability to combine such information effectively within their parameter space. We introduce EpiK-Eval, a novel question-answering benchmark designed to assess LLMs' proficiency in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across several LLMs reveal significant deficiencies in this area. We argue that these shortcomings stem from the intrinsic nature of prevailing training objectives, and we consequently advocate for refining the approach to knowledge consolidation, as it harbors the potential to dramatically improve LLMs' overall effectiveness and performance. The findings from this study offer insights for developing more robust and reliable LLMs.
Publisher
Arxiv