Code Llama: Open Foundation Models for Code

August 24, 2023

Abstract

We release Code Llama, a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. The 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
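
As a rough illustration of the infilling capability described above, the sketch below uses the Hugging Face transformers integration of Code Llama, where a <FILL_ME> sentinel in the prompt marks the span to be generated from the surrounding code. The checkpoint name and sentinel token follow that integration's documentation and are assumptions of this sketch, not details from the paper itself.

# Minimal sketch (not from the paper): infilling with Code Llama via the
# Hugging Face transformers library. Assumes the "codellama/CodeLlama-7b-hf"
# checkpoint and the <FILL_ME> sentinel supported by its tokenizer.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# <FILL_ME> marks the middle span; the model conditions on the code
# before (prefix) and after (suffix) the sentinel.
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=128)

# Keep only the newly generated tokens, i.e. the infilled middle.
middle = tokenizer.decode(
    outputs[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(prompt.replace("<FILL_ME>", middle))

Because the 7B and 13B base and Instruct variants were trained with a fill-in-the-middle objective, the completion here is conditioned on both the docstring position and the trailing return statement rather than on a left-to-right prefix alone.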

Authors

Baptiste Rozière

Jonas Gehring

Fabian Gloeckle

Sten Sootla

Itai Gat

Ellen Tan

Yossef (Yossi) Adi

Jingyu Liu

Tal Remez

Jérémy Rapin

Artyom Kozhevnikov

Ivan Evtimov

Joanna Bitton

Manish Bhatt

Cristian Canton Ferrer

Aaron Grattafiori

Wenhan Xiong

Alexandre Défossez

Jade Copet

Faisal Azhar

Hugo Touvron

Gabriel Synnaeve

Louis Martin

Nicolas Usunier

Thomas Scialom

Publisher

Meta AI

Research Topics

Natural Language Processing (NLP)

Core Machine Learning

Related Publications

July 23, 2024

HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

The Llama 3 Herd of Models

Llama team

July 21, 2024

CORE MACHINE LEARNING

From Neurons to Neutrons: A Case Study in Mechanistic Interpretability

Ouail Kitouni, Niklas Nolte, Samuel Pérez Díaz, Sokratis Trifinopoulos, Mike Williams

July 08, 2024

THEORY

CORE MACHINE LEARNING

An Adaptive Stochastic Gradient Method with Non-negative Gauss-Newton Stepsizes

Antonio Orvieto, Lin Xiao

June 25, 2024

NLP

Neurons in Large Language Models: Dead, N-gram, Positional

Elena Voita, Javier Ferrando Monsonis, Christoforos Nalmpantis
