October 27, 2023
In the era of artificial intelligence, large language models (LLMs) play an increasingly pivotal role. Despite their widespread use, their ability to consolidate knowledge from different training documents, a crucial capability for many applications, remains unexplored. This study is the first to investigate LLMs' capability to effectively combine such information within their parameter space. To this end, we introduce EpiK-Eval, a unique question-answering benchmark designed to assess LLMs' proficiency in formulating a coherent and consistent knowledge representation from segmented narratives. Evaluations across multiple LLMs reveal significant deficiencies in this area. We argue that these shortcomings stem from the intrinsic nature of current training objectives. Consequently, we advocate for refined approaches to knowledge consolidation, as this holds the potential to dramatically improve the overall effectiveness and performance of LLMs. The findings of this study offer insights for developing more robust and reliable LLMs.
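As a rough illustration of the kind of consolidation such a benchmark probes, the sketch below splits a toy narrative into segments, standing in for separate training documents, and checks whether a model can answer a question whose answer depends on facts from all of them. Everything here (the toy story, the answer_question callable, the exact-match scoring) is a hypothetical placeholder for illustration only, not the actual EpiK-Eval data or pipeline.

```python
# Minimal sketch, assuming a consolidation-style QA evaluation: the model
# sees a story only as separate segments, then must answer a question that
# no single segment answers on its own. Placeholder names throughout.

from typing import Callable, List

# A toy narrative broken into segments, as if each segment appeared in a
# different training document.
SEGMENTS: List[str] = [
    "Part 1: On Monday, Ada bought three apples.",
    "Part 2: On Tuesday, Ada gave one apple to Ben.",
    "Part 3: On Wednesday, Ada bought two more apples.",
]

# Answering correctly requires combining facts from all three segments,
# not recalling any single one in isolation.
QUESTION = "How many apples does Ada have at the end of Wednesday?"
GOLD_ANSWER = "4"


def evaluate_consolidation(answer_question: Callable[[str], str]) -> bool:
    """Return True if the model's answer matches the gold answer.

    `answer_question` stands in for a model that was trained (or prompted)
    on the segments separately and must now consolidate them.
    """
    prediction = answer_question(QUESTION).strip()
    return prediction == GOLD_ANSWER


if __name__ == "__main__":
    # A trivial stand-in "model" that happens to consolidate correctly.
    dummy_model = lambda q: "4"
    print("consolidated correctly:", evaluate_consolidation(dummy_model))
```

In this framing, per-segment recall can be perfect while the consolidated answer is still wrong, which is exactly the gap between memorizing individual documents and forming a coherent knowledge representation across them.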
Publisher
arXiv