July 20, 2018
In this paper we continue the line of work in which neural machine translation training is used to produce joint cross-lingual fixed-dimensional sentence embeddings. Within this framework we introduce a simple method of adding a loss to the learning objective which penalizes the distance between representations of bilingually aligned sentences. We evaluate cross-lingual transfer using two approaches: cross-lingual similarity search on an aligned corpus (Europarl) and cross-lingual document classification on a recently published benchmark Reuters corpus. We find that the similarity loss significantly improves performance on both. Our cross-lingual transfer performance is competitive with the state of the art, with potential for further improvement by investing in a stronger in-language baseline. Our results are based on a set of 6 European languages.
Publisher
RepL4NLP Workshop
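The similarity loss described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of squared L2 distance, the `weight` hyperparameter, and the function names are assumptions; the abstract only specifies a penalty on the distance between embeddings of aligned sentence pairs, added to the NMT training objective.

```python
import torch

def similarity_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor) -> torch.Tensor:
    """Penalize distance between fixed-dimensional embeddings of aligned sentences.

    src_emb, tgt_emb: (batch, dim) sentence embeddings for bilingually
    aligned pairs. Squared L2 distance is an assumed choice of metric.
    """
    return ((src_emb - tgt_emb) ** 2).sum(dim=1).mean()

def total_loss(nmt_loss: torch.Tensor,
               src_emb: torch.Tensor,
               tgt_emb: torch.Tensor,
               weight: float = 1.0) -> torch.Tensor:
    """Hypothetical combined objective: NMT loss plus weighted similarity term."""
    return nmt_loss + weight * similarity_loss(src_emb, tgt_emb)
```

Identical embeddings for a sentence pair incur zero penalty, so the loss pulls the two languages' representations toward a shared embedding space while the translation loss keeps them informative.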