July 18, 2023
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
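To make the "dialogue use case" concrete, here is a minimal sketch of querying a released Llama 2-Chat checkpoint. It assumes the weights are obtained through the Hugging Face Hub under the repo id "meta-llama/Llama-2-7b-chat-hf" and that access has been granted; the hosting path and prompt string are illustrative assumptions, not part of the abstract above.

```python
# Minimal sketch (assumptions noted above): single-turn dialogue with a
# Llama 2-Chat checkpoint via the transformers library.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2-Chat was fine-tuned with an instruction format ([INST] ... [/INST]).
prompt = "[INST] Summarize what a fine-tuned chat model is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```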
Written by
Louis Martin
Kevin Stone
Peter Albert
Amjad Almahairi
Yasmine Babaei
Nikolay Bashlykov
Soumya Batra
Dan Bikel
Lukas Blecher
Moya Chen
Guillem Cucurull
David Esiobu
Jude Fernandes
Jeremy Fu
Wenyin Fu
Brian Fuller
Cynthia Gao
Vedanuj Goswami
Naman Goyal
Anthony Hartshorn
Saghar Hosseini
Rui Hou
Marcin Kardas
Viktor Kerkez
Isabel Kloumann
Artem Korenev
Punit Singh Koura
Marie-Anne Lachaux
Thibaut Lavril
Jenya Lee
Diana Liskovich
Yinghai Lu
Xavier Martinet
Todor Mihaylov
Igor Molybog
Yixin Nie
Andrew Poulton
Jeremy Reizenstein
Kalyan Saladi
Alan Schelten
Ruan Silva
Ranjan Subramanian
Xiaoqing Ellen Tan
Binh Tang
Ross Taylor
Andrew Kuan
Puxin Xu
Zheng Yan
Iliyan Zarov
Yuchen Zhang
Melanie Kambadur
Sharan Narang
Aurelien Rodriguez
Robert Stojnic
Thomas Scialom
Publisher
arXiv