Audiobox: Unified Audio Generation with Natural Language Prompts

December 11, 2023
Audio is an essential part of our lives, but creating it often requires expertise and is time-consuming. Research communities have made great progress over the past year advancing the performance of large-scale audio generative models for a single modality (speech, sound, or music) by adopting more powerful generative models and scaling data. However, these models lack controllability in several respects: speech generation models cannot synthesize novel styles from a text description and are limited in domain coverage (e.g., outdoor environments); sound generation models offer only coarse-grained control based on descriptions like “a person speaking” and generate only mumbling human voices. This paper presents Audiobox, a unified model based on flow matching that is capable of generating various audio modalities. We design description-based and example-based prompting to enhance controllability and unify the speech and sound generation paradigms. We allow transcript, vocal, and other audio styles to be controlled independently when generating speech. To improve model generalization with limited labels, we adapt a self-supervised infilling objective to pre-train on large quantities of unlabeled audio. Audiobox sets new benchmarks on speech and sound generation (0.745 similarity on LibriSpeech for zero-shot TTS; 0.77 FAD on AudioCaps for text-to-sound) and unlocks new methods for generating audio with novel vocal and acoustic styles. We further integrate Bespoke Solvers, which speed up generation by over 25 times compared to the default ODE solver for flow matching, without loss of performance on several tasks.
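For context on the core mechanism: flow matching trains a velocity field and generates samples by integrating an ODE from noise toward data. Below is a minimal, hypothetical sketch of that sampling loop with a fixed-step Euler solver; it is not the released Audiobox code, and `ToyVelocityField`, `sample`, and all dimensions are illustrative stand-ins. The paper's Bespoke Solvers effectively replace this generic integration with a learned, far cheaper one.

```python
# Hypothetical sketch of flow-matching sampling, NOT the actual Audiobox
# implementation. A trained velocity field v(x, t) is integrated from
# t=0 (noise) to t=1 (data) with a fixed-step Euler ODE solver.
import torch
import torch.nn as nn


class ToyVelocityField(nn.Module):
    """Stand-in for a trained flow-matching network (illustrative only)."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim)
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on time by concatenating t to the features.
        t = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t], dim=-1))


@torch.no_grad()
def sample(model: nn.Module, shape: tuple, steps: int = 64) -> torch.Tensor:
    """Euler integration of dx/dt = v(x, t) from noise (t=0) to data (t=1)."""
    x = torch.randn(shape)  # x_0 ~ N(0, I)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((1,), i * dt)
        x = x + dt * model(x, t)  # one Euler step
    return x


features = sample(ToyVelocityField(), shape=(4, 128))
print(features.shape)  # torch.Size([4, 128])
```

With a generic solver like the one above, sample quality degrades as the step count (and hence the number of model evaluations) drops; the reported 25x speedup comes from a solver tailored to the trained model that needs far fewer steps.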
Written by
Akinniyi Akinyemi
Alice Rakotoarison
Andros Tjandra
Apoorv Vyas
Baishan Guo
Bapi Akula
Bowen Shi
Brian Ellis
Ivan Cruz
Jeff Wang
Jiemin Zhang
Mary Williamson
Rashel Moritz
Robbie Adkins
William Ngan
Xinyue Zhang
Yael Yungster
Yi-Chiao Wu
Publisher
arXiv