May 06, 2022
Pre-training on larger datasets with ever-increasing model size is now a proven recipe for improved performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a pre-existing dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
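The core building block is a bi-encoder: two encoders map queries and passages into a shared vector space, and retrieval reduces to a similarity search over passage embeddings. The sketch below illustrates one common way such models are trained, using dot-product scores and in-batch negatives; the model components, dimensions, and loss here are simplified assumptions for illustration, not the exact architecture or training recipe used in the paper.

```python
# A minimal sketch of bi-encoder training with in-batch negatives (illustrative only;
# the encoders, dimensions, and loss are assumptions, not the paper's exact setup).
import torch
import torch.nn.functional as F


class BiEncoder(torch.nn.Module):
    """Two independent encoders map queries and passages into a shared vector space."""

    def __init__(self, dim: int = 128, vocab: int = 30522):
        super().__init__()
        # Stand-ins for the transformer encoders used in practice (e.g. BERT).
        self.query_encoder = torch.nn.EmbeddingBag(vocab, dim)
        self.passage_encoder = torch.nn.EmbeddingBag(vocab, dim)

    def forward(self, query_ids, passage_ids):
        q = self.query_encoder(query_ids)      # (batch, dim)
        p = self.passage_encoder(passage_ids)  # (batch, dim)
        return q, p


def in_batch_negative_loss(q, p):
    """Contrastive loss: each query's positive passage is the matching row;
    all other passages in the batch serve as negatives."""
    scores = q @ p.t()                     # (batch, batch) dot-product similarities
    labels = torch.arange(scores.size(0))  # diagonal entries are the positives
    return F.cross_entropy(scores, labels)


if __name__ == "__main__":
    model = BiEncoder()
    # Toy batch of token-id sequences standing in for (question, passage) pairs.
    queries = torch.randint(0, 30522, (8, 16))
    passages = torch.randint(0, 30522, (8, 64))
    q, p = model(queries, passages)
    loss = in_batch_negative_loss(q, p)
    loss.backward()
    print(f"in-batch negative loss: {loss.item():.4f}")
```

At pre-training time, the (question, passage) or (post, comment) pairs from the large weakly supervised sources play the role of the positives in this loss, before the model is fine-tuned or evaluated on the downstream retrieval benchmarks.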
Written by
Aleksandra Piktus
Anchit Gupta
Kushal Lakhotia
Patrick Lewis
Sebastian Riedel
Sonal Gupta
Vladimir Karpukhin
Yashar Mehdad
Publisher
ACL Rolling Review
Research Topics
Foundational models