SPEECH & AUDIO

NLP

Cocktail HuBERT: Generalized Self-Supervised Pre-Training for Mixture and Single-Source Speech

March 27, 2023

Abstract

Self-supervised learning leverages unlabeled data effectively, improving label efficiency and generalization to domains without labeled data. While recent work has studied generalization to more acoustic/linguistic domains, languages, and modalities, these investigations have been limited to single-source speech with one primary speaker per recording. This paper presents Cocktail HuBERT, a self-supervised learning framework that generalizes to mixture speech using a masked pseudo source separation objective. The objective encourages the model to identify the number of sources, separate and understand the context, and infer the content of masked regions, represented as discovered units. Cocktail HuBERT outperforms the state of the art, achieving 69% lower WER on multi-speaker ASR and 31% lower DER on diarization, and remains competitive on single- and multi-speaker tasks from SUPERB.
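At a high level, the objective described above can be read as HuBERT-style masked prediction of discovered units, extended to multiple label streams with a permutation-invariant assignment between predicted streams and pseudo sources. The following is a minimal NumPy sketch of that idea, not the paper's implementation: it assumes a fixed number of sources, per-frame unit logits, and cross-entropy over masked frames, and all function and variable names are hypothetical.

```python
import itertools
import numpy as np

def cross_entropy(logits, labels):
    """Mean negative log-likelihood of integer labels under softmax(logits).

    logits: (T, V) array of unit logits; labels: (T,) array of unit ids.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def masked_pseudo_source_separation_loss(logits, unit_labels, mask):
    """Toy permutation-invariant masked unit-prediction loss.

    logits:      (S, T, V) - one stream of unit logits per predicted source
    unit_labels: (S, T)    - pseudo-labels (discovered units) per source
    mask:        (T,) bool - frames whose input features were masked

    The loss is computed only on masked frames, and the best one-to-one
    assignment of label streams to prediction streams is chosen, so the
    model is not penalized for the arbitrary ordering of sources.
    """
    n_sources = logits.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n_sources)):
        loss = np.mean([
            cross_entropy(logits[p][mask], unit_labels[s][mask])
            for s, p in enumerate(perm)
        ])
        best = min(best, loss)
    return best
```

Exhaustive search over permutations is only viable for the small source counts typical of mixture speech; a real system would also need the unit discovery step (e.g. k-means over acoustic features) that produces `unit_labels`.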

Related Publications

May 24, 2024

SPEECH & AUDIO

NLP

DOC-RAG: ASR Language Model Personalization with Domain-Distributed Co-occurrence Retrieval Augmentation

Zhe Liu

May 06, 2024

CONVERSATIONAL AI

NLP

GAIA: a benchmark for general AI assistants

Gregoire Mialon, Yann LeCun, Thomas Scialom, Clémentine Fourrier, Thomas Wolf

April 22, 2024

NLP

Text Quality-Based Pruning for Efficient Training of Language Models

Vasu Sharma*, Karthik Padthe*, Newsha Ardalani, Kushal Tirumala, Russ Howes, Hu Xu, Bernie Huang, Daniel Li (FAIR), Armen Aghajanyan, Gargi Ghosh, Luke Zettlemoyer

April 14, 2024

SPEECH & AUDIO

NLP

Multi-task Learning for Front-end Text Processing in TTS

Yun Wang (Speech), Arthur Hinsvark, Qing He, Shun Zhang, Wonjune Kang
