October 26, 2020
In this work, we conduct an evaluation study comparing offline and online neural machine translation architectures. Two sequence-to-sequence models are considered: the convolutional Pervasive Attention model (Elbayad et al., 2018) and the attention-based Transformer (Vaswani et al., 2017). For both architectures, we investigate the impact of online decoding constraints on translation quality through a carefully designed human evaluation on the English-German and German-English language pairs, the latter being particularly sensitive to latency constraints. The evaluation results allow us to identify the strengths and shortcomings of each model when shifting to the online setup.
Written by
Jakob Verbeek
Emmanuelle Esperança-Rodier
Francis Brunet Manquat
Laurent Besacier
Maha Elbayad
Michael Ustaszewski
Publisher
COLING