July 18, 2021
Latent variable models have been successfully applied in lossless compression with the bits-back coding algorithm. However, bits-back suffers from an increase in the bit rate equal to the KL divergence between the approximate posterior and the true posterior. In this paper, we show how to remove this gap asymptotically by deriving bits-back coding algorithms from tighter variational bounds. The key idea is to exploit extended space representations of Monte Carlo estimators of the marginal likelihood. Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space. When parallel architectures can be exploited, our coders can achieve better rates than bits-back with little additional cost. We demonstrate improved lossless compression rates in a variety of settings, especially in out-of-distribution or sequential data compression.
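As a rough sketch of the gap the abstract describes (using our own notation, not taken from the paper): with a latent variable model p(x, z) and approximate posterior q(z | x), standard bits-back coding achieves an expected rate of

\[
\mathbb{E}_{q(z\mid x)}\!\left[\log \frac{q(z\mid x)}{p(x,z)}\right]
= -\log p(x) + \mathrm{KL}\!\left(q(z\mid x)\,\|\,p(z\mid x)\right),
\]

so the overhead relative to the ideal rate \(-\log p(x)\) is exactly the KL divergence between the approximate and true posteriors. A coder built on a K-sample Monte Carlo (importance-weighted) estimator of the marginal likelihood instead targets roughly

\[
-\,\mathbb{E}\!\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k\mid x)}\right],
\qquad z_k \sim q(z\mid x),
\]

which approaches \(-\log p(x)\) as K grows, consistent with the claim that the gap is removed asymptotically.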
Written by
Yangjun Ruan
Karen Ullrich
Daniel Severo
James Townsend
Ashish Khisti
Arnaud Doucet
Alireza Makhzani
Chris J. Maddison
Publisher
ICML 2021
Research Topics
Core Machine Learning