NLP

Luna: Linear Unified Nested Attention

October 26, 2021

Abstract

The quadratic computational and memory complexities of the Transformer's attention mechanism have limited its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Compared to a more traditional attention mechanism, Luna introduces an additional sequence with a fixed length as input and an additional corresponding output, which allows Luna to perform the attention operation linearly while also storing adequate contextual information. We perform extensive evaluations on three benchmarks of sequence modeling tasks: long-context sequence modeling, neural machine translation, and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baseline methods, including the full-rank attention and other efficient sparse and dense attention methods. The implementation of our model is available at https://github.com/XuezheMax/fairseq-apollo
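To make the nested mechanism concrete, below is a minimal, non-causal, single-head sketch of the two attention steps in PyTorch. Everything here (the class name, the projected length p_len, and treating the extra fixed-length sequence as a learned parameter) is an illustrative assumption, not the authors' implementation, which lives in the fairseq-apollo repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LunaAttentionSketch(nn.Module):
    """Single-head, non-causal sketch of Luna's two nested attention steps.

    A fixed-length extra sequence p first attends over the input x ("pack"),
    then x attends over the packed result ("unpack"). Because p has fixed
    length l, both steps are linear in the input length n. This is an
    illustration, not the authors' fairseq-apollo code.
    """

    def __init__(self, d_model: int, p_len: int = 16):
        super().__init__()
        # Assumption: model the extra sequence as a learned parameter.
        self.p = nn.Parameter(torch.randn(p_len, d_model) * d_model ** -0.5)
        self.scale = d_model ** -0.5

    def attend(self, q, k, v):
        # Standard scaled dot-product attention.
        scores = torch.einsum("bqd,bkd->bqk", q, k) * self.scale
        return F.softmax(scores, dim=-1) @ v

    def forward(self, x):
        # x: (batch, n, d_model)
        p = self.p.expand(x.size(0), -1, -1)  # (batch, l, d_model)
        y_p = self.attend(p, x, x)      # pack:   l x n scores -> (batch, l, d)
        y_x = self.attend(x, y_p, y_p)  # unpack: n x l scores -> (batch, n, d)
        return y_x, y_p  # y_x is the main output; y_p is the extra output


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)                      # n = 1024
    luna = LunaAttentionSketch(d_model=64, p_len=16)  # l = 16
    y_x, y_p = luna(x)
    print(y_x.shape, y_p.shape)  # (2, 1024, 64) and (2, 16, 64)
```

Because the extra sequence has fixed length l, the pack step forms l x n attention scores and the unpack step n x l, so both cost O(n) in the input length. In the paper, the extra output is also passed to the next layer as its fixed-length input, letting it accumulate context across layers; the sketch above omits that plumbing.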

Download the Paper

AUTHORS

Xuezhe Ma

Xiang Kong

Sinong Wang

Chunting Zhou

Jonathan May

Hao Ma

Luke Zettlemoyer

Publisher

NeurIPS

Related Publications

February 10, 2026

NLP

AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents

Alisia Lupidi, Bhavul Gauri, Thomas Simon Foster, Bassel Al Omari, Despoina Magka, Alberto Pepe, Alexis Audran-Reiss, Muna Aghamelu, Nicolas Baldwin, Lucia Cipolina-Kun, Jean-Christophe Gagnon-Audet, Chee Hau Leow, Sandra Lefdal, Hossam Mossalam, Abhinav Moudgil, Saba Nazir, Emanuel Tewolde, Isabel Urrego, Jordi Armengol Estape, Amar Budhiraja, Gaurav Chaurasia, Abhishek Charnalia, Derek Dunfield, Karen Hambardzumyan, Daniel Izcovich, Martin Josifoski, Ishita Mediratta, Kelvin Niu, Parth Pathak, Michael Shvartsman, Edan Toledo, Anton Protopopov, Roberta Raileanu, Alexander Miller, Tatiana Shavrina, Jakob Foerster, Yoram Bachrach

December 26, 2025

REINFORCEMENT LEARNING

NLP

Safety Alignment of LMs via Non-cooperative Games

Anselm Paulus, Ilia Kulikov, Brandon Amos, Remi Munos, Ivan Evtimov, Kamalika Chaudhuri, Arman Zharmagambetov

December 18, 2025

NLP

How Good is Post-Hoc Watermarking With Language Model Rephrasing?

Pierre Fernandez, Tom Sander, Hady Elsahar, Hongyan Chang, Tomáš Souček, Sylvestre Rebuffi, Valeriu Lacatusu, Tuan Tran, Alexandre Mourachko

December 12, 2025

NLP

COMPUTER VISION

Text-Guided Semantic Image Encoder

Raghuveer Thirukovalluru, Xiaochuang Han, Bhuwan Dhingra, Emily Dinan, Maha Elbayad
