ESPnet-SE++: Speech Enhancement for Robust Speech Recognition, Translation, and Understanding

July 26, 2022


This paper presents recent progress on integrating speech separation and enhancement (SSE) into the ESPnet toolkit. Compared with the previous ESPnet-SE work, numerous features have been added, including recent state-of-the-art speech enhancement models with their respective training and evaluation recipes. Importantly, a new interface has been designed to flexibly combine speech enhancement front-ends with other tasks, including automatic speech recognition (ASR), speech translation (ST), and spoken language understanding (SLU). To showcase such integration, we performed experiments on carefully designed synthetic datasets for noisy-reverberant multichannel ST and SLU tasks, which can be used as benchmark corpora for future research. In addition to these new tasks, we also use CHiME-4 and WSJ0-2Mix to benchmark multi- and single-channel SE approaches. Results show that the integration of SE front-ends with back-end tasks is a promising research direction even for tasks besides ASR, especially in the multichannel scenario. The code is available online. The multichannel ST and SLU datasets, which are another contribution of this work, are released on HuggingFace.
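The front-end/back-end combination described above can be pictured as a simple pipeline in which an enhancement module cleans the input signal before a downstream model consumes it. The sketch below is purely illustrative: the function names and placeholder models are hypothetical and do not reflect the actual ESPnet-SE++ API.

```python
# Hypothetical sketch of chaining a speech-enhancement (SE) front-end with a
# downstream back-end task (ASR, ST, or SLU). Names are illustrative only;
# this is NOT the ESPnet-SE++ interface.

def enhance(noisy_waveform):
    # Placeholder front-end: an identity mapping standing in for a trained
    # SE model that maps a noisy-reverberant mixture to a cleaner estimate.
    return noisy_waveform

def back_end(waveform):
    # Placeholder back-end standing in for an ASR/ST/SLU model that maps
    # a waveform to text (a transcript, translation, or intent label).
    return "hypothetical transcript"

def pipeline(noisy_waveform):
    # The enhanced signal is fed directly to the back-end, so the two
    # modules can be evaluated separately or fine-tuned jointly.
    return back_end(enhance(noisy_waveform))

print(pipeline([0.0, 0.1, -0.1]))
```

Because the front-end output has the same form as the back-end input (a waveform), the same enhancement module can be reused across different downstream tasks, which is the kind of flexibility the new interface is designed to provide.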



Written by

Zhaoheng Ni

Brian Yan

Chenda Li

Robin Scheibler

Samuele Cornell

Shinji Watanabe

Wangyou Zhang

Xuankai Chang

Yanmin Qian

Yen-Ju Lu

Yoshiki Masuyama

Yu Tsao

Zhong-Qiu Wang


