July 15, 2020
Non-autoregressive machine translation models significantly speed up decoding by allowing parallel prediction of the entire target sequence. However, modeling word order is more challenging due to the lack of autoregressive factors in the model. This difficulty is compounded during training with the cross entropy loss, which can heavily penalize small shifts in word order. In this paper, we propose aligned cross entropy (AXE) as an alternative loss function for training non-autoregressive models. AXE uses a differentiable dynamic program to assign loss based on the best possible monotonic alignment between target tokens and model predictions. AXE-based training of conditional masked language models (CMLMs) improves performance by 3 and 5 BLEU points on WMT 16 EN-RO and WMT 14 EN-DE, respectively. It also significantly outperforms state-of-the-art non-autoregressive models on a range of translation benchmarks.
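To make the idea of a monotonic-alignment loss concrete, here is a minimal sketch of such a dynamic program in PyTorch. It is an illustration only, not the paper's exact recurrence: it assumes a special epsilon token index (`eps_idx`) that a prediction position can emit to be skipped, supports only "align" and "skip prediction" moves (so it assumes the target is no longer than the prediction sequence), and the function name and arguments are hypothetical.

```python
import torch

def aligned_cross_entropy(log_probs, target, eps_idx):
    """Cost of the best monotonic alignment between predictions and targets.

    log_probs: (N, V) predicted log-probabilities for N positions.
    target:    (M,) target token ids, assumed M <= N.
    eps_idx:   index of the special epsilon/blank token (an assumption here).
    Returns a scalar tensor; gradients flow through the chosen alignment.
    """
    N, V = log_probs.shape
    M = target.shape[0]
    inf = torch.tensor(float("inf"))
    # A[i][j] = best cost after consuming i predictions and j target tokens.
    A = [[inf] * (M + 1) for _ in range(N + 1)]
    A[0][0] = torch.zeros(())
    for i in range(1, N + 1):
        for j in range(0, M + 1):
            # skip prediction i: it must emit the epsilon token
            best = A[i - 1][j] - log_probs[i - 1, eps_idx]
            if j > 0:
                # align prediction i with target token j
                aligned = A[i - 1][j - 1] - log_probs[i - 1, target[j - 1]]
                best = torch.minimum(best, aligned)
            A[i][j] = best
    return A[N][M]

# Toy usage: 5 prediction positions, vocabulary of 8, 3 target tokens.
logits = torch.randn(5, 8, requires_grad=True)
loss = aligned_cross_entropy(torch.log_softmax(logits, dim=-1),
                             torch.tensor([3, 1, 4]), eps_idx=0)
loss.backward()
```

Because the minimum over alignments is taken inside the dynamic program, the loss is differentiable almost everywhere with respect to the log-probabilities, so it can replace position-wise cross entropy during training without changing the rest of the optimization loop.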