July 18, 2023
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
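As an illustrative aside (not part of the paper itself), the sketch below shows one common way to run a released Llama 2-Chat checkpoint with the Hugging Face `transformers` library. It assumes the community-hosted `meta-llama/Llama-2-7b-chat-hf` checkpoint, for which access must first be granted under Meta's license on the Hugging Face Hub; the prompt format and generation settings are minimal examples, not the paper's evaluation setup.

```python
# Minimal sketch: loading and querying a Llama 2-Chat checkpoint with
# Hugging Face transformers. Assumes license access to the model repo
# "meta-llama/Llama-2-7b-chat-hf" has been granted on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (requires the `accelerate` package) places weights
# on available GPUs/CPU automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2-Chat expects instructions wrapped in [INST] ... [/INST] tags.
prompt = "[INST] Summarize what Llama 2 is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```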
Written by
Hugo Touvron
Louis Martin
Kevin Stone
Peter Albert
Amjad Almahairi
Yasmine Babaei
Nikolay Bashlykov
Soumya Batra
Dan Bikel
Lukas Blecher
Moya Chen
Guillem Cucurull
David Esiobu
Jude Fernandes
Jeremy Fu
Wenyin Fu
Brian Fuller
Cynthia Gao
Vedanuj Goswami
Naman Goyal
Anthony Hartshorn
Saghar Hosseini
Rui Hou
Marcin Kardas
Viktor Kerkez
Isabel Kloumann
Artem Korenev
Punit Singh Koura
Marie-Anne Lachaux
Thibaut Lavril
Jenya Lee
Diana Liskovich
Yinghai Lu
Xavier Martinet
Todor Mihaylov
Igor Molybog
Yixin Nie
Andrew Poulton
Jeremy Reizenstein
Kalyan Saladi
Alan Schelten
Ruan Silva
Ranjan Subramanian
Xiaoqing Ellen Tan
Binh Tang
Ross Taylor
Andrew Kuan
Puxin Xu
Zheng Yan
Iliyan Zarov
Yuchen Zhang
Melanie Kambadur
Sharan Narang
Aurelien Rodriguez
Robert Stojnic
Thomas Scialom
Publisher
arXiv