LLaMA: Open and Efficient Foundation Language Models

February 24, 2023

Abstract

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
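
As a rough illustration of what the released checkpoints enable, here is a minimal sketch of loading a LLaMA-family model for text generation with the Hugging Face transformers library. The local weight path, precision, and device settings are assumptions for illustration only; the official weights are distributed through Meta's own release process and must first be converted to the library's format.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed local path to converted LLaMA-7B weights (illustrative, not official).
model_id = "path/to/llama-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit the 7B model in memory
    device_map="auto",          # spread layers across available devices (needs accelerate)
)

prompt = "The theory of relativity states that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))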

AUTHORS

Faisal Azhar

Hugo Touvron

Armand Joulin

Aurelien Rodriguez

Baptiste Rozière

Eric Hambro

Gautier Izacard

Guillaume Lample

Marie-Anne Lachaux

Naman Goyal

Thibaut Lavril

Timothée Lacroix

Xavier Martinet

Edouard Grave

Publisher

arXiv
