August 14, 2023
State space models (SSMs) have recently shown promising results on small-scale sequence and language modelling tasks, rivalling and outperforming many attention-based approaches. In this paper, we propose a multi-head state space (MH-SSM) architecture equipped with special gating mechanisms, where parallel heads are taught to learn local and global temporal dynamics on sequence data. As a drop-in replacement for multi-head attention in transformer encoders, this new model significantly outperforms the transformer transducer on the LibriSpeech speech recognition corpus. Furthermore, we augment the transformer block with MH-SSM layers, referred to as the Stateformer, achieving state-of-the-art performance on the LibriSpeech task, with word error rates of 1.76%/4.37% on the development and 1.91%/4.36% on the test sets without using an external language model.
Written by
Yassir Fathullah
Chunyang Wu
Yuan Shangguan (June)
Wenhan Xiong
Jay Mahadeokar
Chunxi Liu
Mark Gales
Ozlem Kalinli
Publisher
Interspeech
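The sketch below is a minimal, hypothetical PyTorch illustration of the kind of multi-head state space layer with output gating that the abstract describes. The head count, state size, diagonal state recurrence, and sigmoid gate are all illustrative assumptions rather than the paper's actual MH-SSM design; the block only shows how parallel per-head SSM recurrences could stand in for multi-head attention inside a transformer encoder.

```python
# Hypothetical multi-head SSM layer with output gating (illustrative sketch,
# not the authors' implementation). Each head runs an independent diagonal
# linear state space recurrence; a sigmoid gate on the input modulates the
# combined head outputs before a final mixing projection, mirroring the
# output projection of multi-head attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadSSM(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 4, d_state: int = 16):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head, self.d_state = n_heads, d_model // n_heads, d_state
        # Per-head diagonal state matrix A (kept stable via -softplus) and input/output maps B, C.
        self.log_a = nn.Parameter(torch.randn(n_heads, d_state))
        self.b = nn.Parameter(torch.randn(n_heads, self.d_head, d_state) * 0.02)
        self.c = nn.Parameter(torch.randn(n_heads, d_state, self.d_head) * 0.02)
        # Gating and head-mixing projections.
        self.gate = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model) split into heads: (batch, time, n_heads, d_head)
        bsz, seq_len, _ = x.shape
        xh = x.view(bsz, seq_len, self.n_heads, self.d_head)
        a = -F.softplus(self.log_a)                             # (heads, state) decay rates
        state = x.new_zeros(bsz, self.n_heads, self.d_state)    # running hidden state per head
        outputs = []
        for t in range(seq_len):
            u = torch.einsum("bhd,hds->bhs", xh[:, t], self.b)  # project input into state space
            state = torch.exp(a) * state + u                    # diagonal linear recurrence
            y = torch.einsum("bhs,hsd->bhd", state, self.c)     # read out per-head features
            outputs.append(y.reshape(bsz, -1))
        y = torch.stack(outputs, dim=1)                         # (batch, time, d_model)
        # Sigmoid gate on the original input modulates the SSM output before mixing heads.
        return self.out(torch.sigmoid(self.gate(x)) * y)


if __name__ == "__main__":
    layer = MultiHeadSSM(d_model=64, n_heads=4)
    print(layer(torch.randn(2, 50, 64)).shape)  # torch.Size([2, 50, 64])
```

In this sketch the sequential loop over time is kept for clarity; efficient SSM layers typically replace it with a convolutional or parallel-scan formulation.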