August 7, 2016
Training neural network language models over large vocabularies is computationally costly compared with count-based models such as Kneser-Ney. We present a systematic comparison of neural strategies for representing and training large vocabularies, including softmax, hierarchical softmax, target sampling, noise-contrastive estimation, and self-normalization. We extend self-normalization to be a proper estimator of likelihood and introduce an efficient variant of softmax. We evaluate each method on three popular benchmarks, examining performance on rare words, the speed/accuracy trade-off, and complementarity with Kneser-Ney.
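To make the self-normalization idea concrete, here is a minimal NumPy sketch of one common formulation: the cross-entropy loss is augmented with a penalty on the squared log partition function, which drives the normalizer toward 1 so that unnormalized scores can be read as approximate log-probabilities at inference time without summing over the full vocabulary. This is an illustration of the general technique, not the paper's implementation; the function name and the `alpha` weight are hypothetical.

```python
import numpy as np

def self_normalized_loss(logits, target, alpha=0.1):
    """Cross-entropy plus a self-normalization penalty.

    Penalizing (log Z)^2 pushes the partition function Z toward 1,
    so at test time logits[w] approximates log p(w) without the
    costly normalization over the whole vocabulary.
    Illustrative sketch; `alpha` is a hypothetical hyperparameter.
    """
    log_z = np.log(np.exp(logits).sum())   # log partition function
    ce = log_z - logits[target]            # -log softmax(logits)[target]
    return ce + alpha * log_z ** 2

# One score per vocabulary word (10k-word toy vocabulary).
rng = np.random.default_rng(0)
logits = rng.normal(size=10_000)
loss = self_normalized_loss(logits, target=42)
```

At training time the penalty term still requires computing the full normalizer, but it buys a cheap, normalization-free scoring rule at inference, which is where the speed/accuracy trade-off discussed above comes in.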