August 27, 2019
In April, Facebook AI launched three calls for research proposals to support cutting-edge research in natural language processing (NLP) and machine translation: robust deep learning for NLP, computationally efficient NLP, and neural machine translation for low-resource languages. Facebook AI received 115 proposals from 35 countries across the three calls. From this pool, we selected 11 winning proposals, which we are announcing below.
These research awards in NLP and machine translation continue our long-term goal of supporting open research within the NLP community and strengthening collaboration between Facebook and academia.
With this goal in mind, Facebook AI is also excited to announce the launch of the AI Language Research Consortium, a community of partners working together to advance priority research areas in NLP. The consortium will foster close collaboration and partnership, enabling deeper exploration of topics such as neural machine translation, robust deep NLP, computationally efficient NLP, representation learning, content understanding, dialog systems, information extraction, sentiment analysis, summarization, data collection and cleaning, and speech translation.
In addition to collaborating with Facebook researchers on multiyear projects and publications, consortium members will receive funding for their research, participate in an annual research workshop where they can present their latest work, and have access to auxiliary events at major NLP conferences.
Facebook believes strongly in open science, and we hope the consortium, as well as these research awards in NLP and machine translation, will help accelerate research in the NLP community. Thank you to everyone who submitted a proposal, and congratulations to the winners.
Computationally efficient NLP
Learning to screen: Accelerating training and inference for NLP models
Cho-Jui Hsieh, UCLA
Efficient deployment of NLP models on edge devices
Song Han, MIT
Integrated triaging and anytime prediction for fast reading comprehension
Yoav Artzi, Cornell University
Sparse latent representations for efficient NLP
Jaap Kamps, University of Amsterdam
Neural machine translation for low-resource languages
Crosswalk: Text + graphs = better crosslingual embeddings
Robert West, EPFL
Robust neural MT from visual representations
Matt Post, Johns Hopkins University
Self-supervised neural machine translation
Cristina España i Bonet, Deutsches Forschungszentrum für Künstliche Intelligenz
Robust deep learning for NLP
Fair adversarial tasks for natural language understanding
Christopher Potts, Stanford University
Robustifying NLP by exploiting invariances learned via human interaction
Zachary Lipton, Carnegie Mellon University
Meta learning with distributional signatures
Regina Barzilay, MIT
Robust and fair dialectal NLP via unsupervised social media LM transfer
Brendan O'Connor and Mohit Iyyer, University of Massachusetts Amherst