Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

July 18, 2021

Abstract

Latent variable models have been successfully applied in lossless compression with the bits-back coding algorithm. However, bits-back suffers from an increase in the bit rate equal to the KL divergence between the approximate posterior and the true posterior. In this paper, we show how to remove this gap asymptotically by deriving bits-back coding algorithms from tighter variational bounds. The key idea is to exploit extended space representations of Monte Carlo estimators of the marginal likelihood. Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space. When parallel architectures can be exploited, our coders can achieve better rates than bits-back with little additional cost. We demonstrate improved lossless compression rates in a variety of settings, especially in out-of-distribution or sequential data compression.
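The rate penalty described in the abstract can be illustrated with a toy discrete latent variable model (all numbers below are hypothetical): the expected bits-back rate exceeds the optimal rate −log₂ p(x) by exactly KL(q(z|x) ‖ p(z|x)), which is the gap the paper's Monte Carlo schemes shrink.

```python
import math

# Toy two-state latent variable model (hypothetical numbers):
# prior p(z), likelihood p(x|z) for one fixed observed symbol x,
# and an approximate posterior q(z|x).
p_z = [0.5, 0.5]
p_x_given_z = [0.8, 0.3]   # p(x | z) evaluated at the observed x
q_z = [0.7, 0.3]           # approximate posterior q(z | x)

# Marginal likelihood p(x) = sum_z p(z) p(x|z)
p_x = sum(pz * px for pz, px in zip(p_z, p_x_given_z))

# True posterior p(z|x) = p(z) p(x|z) / p(x)
post = [pz * px / p_x for pz, px in zip(p_z, p_x_given_z)]

# Expected bits-back rate: E_q[log2 q(z|x) - log2 p(x, z)]
rate = sum(q * (math.log2(q) - math.log2(pz * px))
           for q, pz, px in zip(q_z, p_z, p_x_given_z))

# Optimal rate and the KL gap
optimal = -math.log2(p_x)
kl = sum(q * math.log2(q / t) for q, t in zip(q_z, post))

# Identity: bits-back rate = optimal rate + KL(q || true posterior)
assert abs(rate - (optimal + kl)) < 1e-9
print(f"bits-back rate = {rate:.4f} bits, "
      f"optimal = {optimal:.4f} bits, gap = KL = {kl:.4f} bits")
```

When q matches the true posterior exactly, the KL term vanishes and bits-back is optimal; the paper's contribution is to close this gap asymptotically even when q is imperfect.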

AUTHORS

Yangjun Ruan

Karen Ullrich

Daniel Severo

James Townsend

Ashish Khisti

Arnaud Doucet

Alireza Makhzani

Chris J. Maddison

Publisher

ICML 2021

Research Topics

Core Machine Learning

Related Publications

November 18, 2025

Core Machine Learning

Souper-Model: How Simple Arithmetic Unlocks State-of-the-Art LLM Performance

Roberta Raileanu*, Alexis Audran-Reiss, Amar Budhiraja*, Anton Protopopov, Bhavul Gauri, Despoina Magka, Gaurav Chaurasia, Michael Slater, Shalini Maiti*, Tatiana Shavrina, Yoram Bachrach (* equal authorship)

October 13, 2025

Reinforcement Learning

SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models

Paria Rashidinejad, Cai Zhou, Tommi Jaakkola, DiJia Su, Bo Liu, Feiyu Chen, Chenyu Wang, Shannon Zejiang Shen, Sid Wang, Siyan Zhao, Song Jiang, Yuandong Tian

September 24, 2025

NLP

CWM: An Open-Weights LLM for Research on Code Generation with World Models

Chris Cummins, Hugh Leather, Aram Markosyan, Matteo Pagliardini, Tal Remez, Volker Seeker, Marco Selvi, Lingming Zhang, Abhishek Charnalia, Alex Gu, Badr Youbi Idrissi, Christian Keller, Daniel Haziza, David Zhang, Dmitrii Pedchenko, Emily McMilin, Fabian Gloeckle, Felix Kreuk, Francisco Massa, François Fleuret, Gabriel Synnaeve, Gal Cohen, Gallil Maimon, Jacob Kahn, Jade Copet, Jannik Kossen, Jonas Gehring, Jordi Armengol-Estape, Juliette Decugis, Keyur Muzumdar, Kunhao Zheng, Luca Wehrstedt, Maximilian Beck, Michael Hassid, Michel Meyer, Naila Murray, Oren Sultan, Ori Yoran, Pedram Bashiri, Peter O'Hearn, Pierre Chambon, Pierre-Emmanuel Mazaré, Quentin Carbonneaux, Rahul Kindi, Sida Wang, Taco Cohen, Vegard Mella, Yossi Adi, Yuxiang Wei, Zacharias Fisches

August 14, 2025

Computer Vision

DINOv3

Timothée Darcet, John Brandt, Julien Mairal, Andrea Vedaldi, Camille Couprie, Cijo Jose, Claire Roberts, Daniel Haziza, Federico Baldassarre, Francisco Massa, Herve Jegou, Huy V. Vo, Jamie Tolan, Jianyuan Wang, Leo Sentana, Luca Wehrstedt, Marc Szafraniec, Maximilian Seitzer, Maxime Oquab, Michaël Ramamonjisoa, Oriane Siméoni, Patrick Labatut, Piotr Bojanowski, Seungeun Yi, Theo Moutakanni, Vasil Khalidov

December 07, 2020

Core Machine Learning

Adversarial Example Games

Avishek Joey Bose, Gauthier Gidel, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton

November 03, 2020

Core Machine Learning

Robust Embedded Deep K-means Clustering

Rui Zhang, Hanghang Tong, Yinglong Xia, Yada Zhu
