Unsupervised Hyper-alignment for Multilingual Word Embeddings

Jean Alaux, Edouard Grave, Marco Cuturi, Armand Joulin

May 17, 2019

Abstract

We consider the problem of aligning continuous word representations, learned in multiple languages, to a common space. It was recently shown that, in the case of two languages, it is possible to learn such a mapping without supervision. This paper extends this line of work to the problem of aligning multiple languages to a common space. A solution is to independently map all languages to a pivot language. Unfortunately, this degrades the quality of indirect word translation. We thus propose a novel formulation that ensures composable mappings, leading to better alignments. We evaluate our method by jointly aligning word vectors in eleven languages, showing consistent improvement with indirect mappings while maintaining competitive performance on direct word translation.
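
To make the pivot-based setup concrete, below is a minimal NumPy sketch of the baseline the abstract describes: each language is mapped to the pivot space with its own orthogonal matrix, and an indirect translation between two non-pivot languages composes the two mappings. This is not the paper's hyper-alignment objective; the Procrustes step over a seed lexicon, and all function and variable names, are illustrative assumptions (the paper's own setting is unsupervised).

```python
# Pivot-based alignment sketch (illustrative only, not the paper's method).
# Assumptions: all embeddings share dimension d, a small seed lexicon is
# available per language, and each mapping is constrained to be orthogonal.

import numpy as np


def procrustes(X, Y):
    """Orthogonal Q minimizing ||X Q - Y||_F (closed-form Procrustes solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt


def align_to_pivot(X_langs, X_pivot, lexicons):
    """Independently map every language to the pivot space.

    X_langs  : dict lang -> (n_lang, d) embedding matrix
    X_pivot  : (n_pivot, d) pivot-language embedding matrix
    lexicons : dict lang -> (src_idx, tgt_idx) index arrays pairing seed translations
    """
    Q = {}
    for lang, X in X_langs.items():
        src_idx, tgt_idx = lexicons[lang]
        Q[lang] = procrustes(X[src_idx], X_pivot[tgt_idx])
    return Q


def indirect_map(Q, src, tgt):
    """Compose mappings for src -> tgt translation through the pivot:
    x @ Q[src] lands in pivot space, and Q[tgt].T maps pivot space to tgt."""
    return Q[src] @ Q[tgt].T
```

Because each mapping is orthogonal, indirect translation reduces to the product Q_src Q_tgt^T; when the mappings are learned independently, errors accumulate in this composition, which is the degradation the proposed formulation is designed to avoid.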

Related Publications

Bridging the Gap Between Relevance Matching and Semantic Matching for Short Text Similarity Modeling

Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, Jimmy Lin

September 10, 2019

Unsupervised Question Answering by Cloze Translation

Patrick Lewis, Ludovic Denoyer, Sebastian Riedel

July 27, 2019

Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives

Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang

August 01, 2019
