CWM: An Open-Weights LLM for Research on Code Generation with World Models

September 24, 2025

Abstract

We release Code World Model (CWM), a 32-billion-parameter open-weights LLM, to advance research on code generation with world models. To improve code understanding beyond what can be learned from training on static code alone, we mid-train CWM on a large number of observation-action trajectories from Python interpreter and agentic Docker environments, and perform extensive multi-task reasoning RL in verifiable coding, math, and multi-turn software engineering environments. With CWM, we provide a strong testbed for researchers to explore the opportunities world modeling affords for improving code generation with reasoning and planning in computational environments. We present first steps showing how world models can benefit agentic coding and enable step-by-step simulation of Python code execution, and we report early results on how reasoning can benefit from the latter. CWM is a dense, decoder-only LLM trained with a context size of up to 131k tokens. Independent of its world modeling capabilities, CWM offers strong performance on general coding and math tasks: it reaches pass@1 scores of 65.8% on SWE-bench Verified (with test-time scaling), 68.6% on LiveCodeBench, 96.6% on Math-500, and 76.0% on AIME 2024. To support further research on code world modeling, we release model checkpoints after mid-training, SFT, and RL.
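The observation-action trajectories mentioned above pair the code a program executes with the runtime state that execution produces. The exact trace format used for CWM's mid-training is not described on this page, so the Python sketch below is only an illustrative assumption of what such data could look like: it uses the standard-library tracer (sys.settrace) to record, for each executed source line, the local-variable state observed at that point.

    import sys

    def record_trajectory(fn, *args, **kwargs):
        """Run fn under a line-level tracer and return (result, trace).

        Each trace step pairs the source line about to execute (the "action")
        with the local-variable state visible at that point (the "observation").
        This is an illustrative format, not the one used to train CWM.
        """
        trace = []

        def tracer(frame, event, arg):
            if event == "line":  # fires just before each source line runs
                trace.append({
                    "line": frame.f_lineno,
                    "locals": {k: repr(v) for k, v in frame.f_locals.items()},
                })
            return tracer  # keep receiving events for this and nested frames

        sys.settrace(tracer)
        try:
            result = fn(*args, **kwargs)
        finally:
            sys.settrace(None)  # always remove the tracer
        return result, trace

    def bubble_sort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    result, trace = record_trajectory(bubble_sort, [3, 1, 2])
    print(result)             # [1, 2, 3]
    for step in trace[:5]:    # first few observation-action pairs
        print(step["line"], step["locals"])

Each (line, locals) pair is one observation-action step; training on traces of this kind is what lets a model predict how program state evolves, in the spirit of the step-by-step execution simulation described in the abstract.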

Written by

Jade Copet

Quentin Carbonneaux

Gal Cohen

Jonas Gehring

Jacob Kahn

Jannik Kossen

Felix Kreuk

Emily McMilin

Michel Meyer

Yuxiang Wei

David Zhang

Kunhao Zheng

Jordi Armengol-Estapé

Pedram Bashiri

Maximilian Beck

Pierre Chambon

Abhishek Charnalia

Chris Cummins

Juliette Decugis

Zacharias Fisches

François Fleuret

Fabian Gloeckle

Alex Gu

Michael Hassid

Daniel Haziza

Badr Youbi Idrissi

Christian Keller

Rahul Kindi

Hugh Leather

Gallil Maimon

Aram Markosyan

Francisco Massa

Pierre-Emmanuel Mazaré

Vegard Mella

Naila Murray

Keyur Muzumdar

Peter O'Hearn

Matteo Pagliardini

Dmitrii Pedchenko

Tal Remez

Volker Seeker

Marco Selvi

Oren Sultan

Sida Wang

Luca Wehrstedt

Ori Yoran

Lingming Zhang

Taco Cohen

Yossi Adi

Gabriel Synnaeve

Publisher

arXiv

Research Topics

Natural Language Processing (NLP)

Core Machine Learning

Related Publications

October 19, 2025

RESEARCH

NLP

Controlling Multimodal LLMs via Reward-guided Decoding

Oscar Mañas, Pierluca D'Oro, Koustuv Sinha, Adriana Romero Soriano, Michal Drozdzal, Aishwarya Agrawal

October 13, 2025

REINFORCEMENT LEARNING

RESEARCH

SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models

Chenyu Wang, Paria Rashidinejad, DiJia Su, Song Jiang, Sid Wang, Siyan Zhao, Cai Zhou, Shannon Zejiang Shen, Feiyu Chen, Tommi Jaakkola, Yuandong Tian, Bo Liu

September 24, 2025

CONVERSATIONAL AI

REINFORCEMENT LEARNING

Compute as Teacher: Turning Inference Compute Into Reference-Free Supervision

Dulhan Jayalath, Shashwat Goel, Thomas Simon Foster, Parag Jain, Suchin Gururangan, Cheng Zhang, Anirudh Goyal, Alan Schelten

September 24, 2025

RESEARCH

NLP

Code World Model Preparedness Report

Daniel Song, Peter Ney, Cristina Menghini, Faizan Ahmad, Aidan Boyd, Nathaniel Li, Ziwen Han, Jean-Christophe Testud, Saisuke Okabayashi, Maeve Ryan, Jinpeng Miao, Hamza Kwisaba, Felix Binder, Spencer Whitman, Jim Gust, Esteban Arcaute, Dhaval Kapil, Jacob Kahn, Ayaz Minhas, Tristan Goodman, Lauren Deason, Alexander Vaughan, Shengjia Zhao, Summer Yue
