April 30, 2019
We present a method for converting any voice to a target voice. The method is based on a WaveNet autoencoder, with the addition of a novel attention component that supports the modification of timing between the input and the output samples. The attention is trained in an unsupervised way, by teaching the neural network to recover the original timing from an artificially modified one. By adding a generic robotic voice, which we convert to the target voice, we obtain a robust Text-to-Speech pipeline that can be trained without any transcript. Our experiments show that the proposed method recovers the timing of the speaker and that the resulting pipeline provides a competitive Text-to-Speech method.
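The unsupervised timing objective can be illustrated with a toy example: artificially warp the timing of a feature sequence, then let a soft dot-product attention align the warped frames back to the originals. This is a minimal sketch, not the paper's actual model; the frame features, the monotone warp, and the single attention step are all illustrative stand-ins for the WaveNet autoencoder's learned representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frame" sequence: each frame is a distinctive feature vector
# (a stand-in for the autoencoder's latent frames).
T, D = 20, 16
frames = rng.normal(size=(T, D))

# Artificially modify the timing: a monotone warp that repeats or
# skips frames, mimicking the paper's self-supervised augmentation.
warp = np.sort(rng.integers(0, T, size=T))
warped = frames[warp]

# Soft dot-product attention from warped frames back to the originals;
# a good alignment should recover the artificial warp.
scores = warped @ frames.T                      # (T, T) similarity
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # rows sum to 1
recovered = weights.argmax(axis=1)              # hard alignment

print(f"fraction of timing recovered: {(recovered == warp).mean():.2f}")
```

Because each frame here is nearly orthogonal to the others, the attention rows peak at the correct source positions and the warp is recovered; in the actual method, the network must learn representations with this property.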