October 26, 2021
The quadratic computational and memory complexities of the Transformer's attention mechanism have limited its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Compared to a more traditional attention mechanism, Luna introduces an additional sequence with a fixed length as input and an additional corresponding output, which allows Luna to perform the attention operation linearly while also storing adequate contextual information. We perform extensive evaluations on three benchmarks of sequence modeling tasks: long-context sequence modeling, neural machine translation, and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baseline methods, including full-rank attention and other efficient sparse and dense attention methods. The implementation of our model is available at https://github.com/XuezheMax/fairseq-apollo
Publisher: NeurIPS
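The nested mechanism described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (see the linked repository for that): the class and parameter names (`NestedLinearAttention`, `num_extra`) are chosen here for illustration, the query/key/value projections are shared across both steps for brevity, plain softmax attention is used in each step, and the causal and multi-head details from the paper are omitted. The point is only to show why the cost is linear: a fixed-length sequence first attends over the input ("pack"), and the input then attends over that packed context ("unpack"), so both steps cost O(l * n) rather than O(n^2).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def scaled_dot_product_attention(q, k, v):
    # Standard softmax attention: softmax(Q K^T / sqrt(d)) V
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v


class NestedLinearAttention(nn.Module):
    """Simplified sketch of Luna-style nested attention.

    A learned extra sequence p of fixed length l << n attends over the
    input X (pack step, cost O(l * n)), then every input position attends
    over the l packed vectors (unpack step, also O(l * n)), so the overall
    cost is linear in the input length n.
    """

    def __init__(self, d_model: int, num_extra: int = 16):
        super().__init__()
        # The additional fixed-length sequence used as an extra input.
        self.p = nn.Parameter(torch.randn(num_extra, d_model) / d_model ** 0.5)
        # Shared projections (a simplification; Luna uses separate parameters).
        self.proj_q = nn.Linear(d_model, d_model)
        self.proj_k = nn.Linear(d_model, d_model)
        self.proj_v = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor):
        # x: (batch, n, d_model)
        batch = x.size(0)
        p = self.p.unsqueeze(0).expand(batch, -1, -1)            # (batch, l, d)

        # Pack: the fixed-length sequence attends over the full input,
        # compressing it into l context vectors.
        packed = scaled_dot_product_attention(
            self.proj_q(p), self.proj_k(x), self.proj_v(x))      # (batch, l, d)

        # Unpack: each input position attends over the packed context,
        # producing an output of the original length.
        y = scaled_dot_product_attention(
            self.proj_q(x), self.proj_k(packed), self.proj_v(packed))  # (batch, n, d)

        # Return the main output plus the updated extra sequence, the
        # "additional corresponding output" carried to the next layer.
        return y, packed


if __name__ == "__main__":
    layer = NestedLinearAttention(d_model=64, num_extra=16)
    x = torch.randn(2, 1024, 64)
    y, p_next = layer(x)
    print(y.shape, p_next.shape)  # (2, 1024, 64) and (2, 16, 64)
```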