April 30, 2018
We present a new neural text-to-speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution handles unconstrained voice samples without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting-buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme. The speakers are similarly represented by a short vector that can also be fitted to new identities, even from only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. In order to promote reproducibility, we release our source code and models.
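The shifting-buffer mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration only — the function name `voiceloop_step`, the mean-based buffer summary, the linear context/speaker mix, and the mean readout are all toy assumptions standing in for the paper's learned networks; only the overall loop structure (attend over input encodings, write a new vector into the buffer, shift out the oldest, read the output frame from the same buffer) follows the text:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def voiceloop_step(buffer, encodings, speaker):
    """One hypothetical decoding step of the shifting-buffer loop.

    buffer:    list of k d-dim vectors (the working memory)
    encodings: per-character/phoneme vectors from the lookup table
    speaker:   short speaker-identity vector (d-dim)
    """
    # Attention is estimated from the buffer: summarize it (toy: mean)
    # and score every input encoding against that summary.
    summary = [sum(col) / len(buffer) for col in zip(*buffer)]
    weights = softmax([dot(summary, e) for e in encodings])
    context = [sum(w * e[i] for w, e in zip(weights, encodings))
               for i in range(len(encodings[0]))]

    # New buffer entry combines attended context and speaker vector
    # (toy additive mix; the paper uses learned networks here).
    new_vec = [c + s for c, s in zip(context, speaker)]

    # Shift: newest vector in, oldest vector out.
    buffer = [new_vec] + buffer[:-1]

    # The output audio frame is read from the same buffer (toy: mean).
    frame = [sum(col) / len(buffer) for col in zip(*buffer)]
    return buffer, frame
```

Under this sketch, "priming" the buffer simply means initializing `buffer` with non-zero (e.g. random) vectors before the first step, which perturbs every subsequent attention and readout and thus varies the generated speech.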
Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani