October 26, 2020
In this work, we conduct an evaluation study comparing offline and online neural machine translation architectures. We consider two sequence-to-sequence models: the convolutional Pervasive Attention model (Elbayad et al., 2018) and the attention-based Transformer (Vaswani et al., 2017). For both architectures, we investigate the impact of online decoding constraints on translation quality through a carefully designed human evaluation on the English-German and German-English language pairs, the latter being particularly sensitive to latency constraints. The evaluation results allow us to identify the strengths and shortcomings of each model when shifting to the online setup.
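The online setup referred to above constrains the decoder to translate from a growing source prefix rather than the full sentence. As a rough illustration, the following is a minimal Python sketch of a wait-k style decoding loop, a common way to impose such latency constraints; the model interface (encode_prefix, predict_next) is hypothetical and does not reflect the paper's actual implementation.

```python
# Minimal sketch of wait-k online decoding. The model interface
# (encode_prefix, predict_next) is hypothetical, for illustration only.

def wait_k_decode(model, source_tokens, k=3, eos="</s>", max_len=200):
    """Alternate reading source tokens and writing target tokens,
    staying k source tokens ahead of the output (the wait-k policy)."""
    target = []
    read = min(k, len(source_tokens))  # read the first k source tokens
    while len(target) < max_len:
        # Online constraint: the encoder only sees the source prefix
        # read so far, unlike offline decoding, which sees the full input.
        states = model.encode_prefix(source_tokens[:read])
        token = model.predict_next(states, target)
        if token == eos:
            break
        target.append(token)
        if read < len(source_tokens):
            read += 1  # read one more source token per emitted target token
    return target
```

In offline decoding, by contrast, the encoder processes the entire source sentence before the first target token is produced, which is what makes the two setups worth comparing on quality.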
Written by
Jakob Verbeek
Emmanuelle Esperança-Rodier
Francis Brunet-Manquat
Laurent Besacier
Maha Elbayad
Michael Ustaszewski
Publisher
COLING