Code Llama: Open Foundation Models for Code

August 24, 2023

Abstract

We release Code Llama, a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction-following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. The 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
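The infilling capability mentioned above means the 7B and 13B models can complete code conditioned on both the prefix and the suffix around a gap, not just left-to-right. Below is a minimal sketch of how this can be invoked, assuming the third-party Hugging Face transformers integration; the codellama/CodeLlama-7b-hf checkpoint name and the <FILL_ME> placeholder are conventions of that integration, not details given on this page.

# Infilling sketch: assumes Hugging Face transformers provides
# CodeLlamaTokenizer / LlamaForCausalLM and hosts the
# codellama/CodeLlama-7b-hf checkpoint (third-party integration
# details, not from this page).
from transformers import CodeLlamaTokenizer, LlamaForCausalLM

tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# <FILL_ME> marks the span to infill; the tokenizer arranges the prefix
# and suffix so the model generates the missing middle.
prompt = '''def remove_non_ascii(s: str) -> str:
    """<FILL_ME>"""
    return "".join(c for c in s if ord(c) < 128)
'''

input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens (the infilled middle) and
# splice them back into the original source in place of the placeholder.
filling = tokenizer.batch_decode(output[:, input_ids.shape[1]:],
                                 skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))

Only the 7B and 13B base and Instruct variants support this mode; the Python specializations and the 34B models generate left-to-right only.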

AUTHORS

Baptiste Rozière

Jonas Gehring

Fabian Gloeckle

Sten Sootla

Itai Gat

Ellen Tan

Yossef (Yossi) Adi

Jingyu Liu

Tal Remez

Jérémy Rapin

Artyom Kozhevnikov

Ivan Evtimov

Joanna Bitton

Manish Bhatt

Cristian Canton Ferrer

Aaron Grattafiori

Wenhan Xiong

Alexandre Défossez

Jade Copet

Faisal Azhar

Hugo Touvron

Gabriel Synnaeve

Louis Martin

Nicolas Usunier

Thomas Scialom

Publisher

Meta AI

Research Topics

Natural Language Processing (NLP)

Core Machine Learning
