NLP

Luna: Linear Unified Nested Attention

October 26, 2021

Abstract

The quadratic computational and memory complexities of the Transformer’s attention mechanism have limited its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. As compared to a more traditional attention mechanism, Luna introduces an additional sequence with a fixed length as input and an additional corresponding output, which allows Luna to perform attention operations linearly, while also storing adequate contextual information. We perform extensive evaluations on three benchmarks of sequence modeling tasks: long-context sequence modeling, neural machine translation and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baseline methods including the full-rank attention and other efficient sparse and dense attention methods. The implementation of our model is available at https://github.com/XuezheMax/fairseq-apollo
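To make the nested mechanism concrete, the sketch below illustrates the idea from the abstract: a fixed-length extra sequence first attends over the long context (producing a compressed summary of fixed length), and the input sequence then attends over that summary, so neither step scales quadratically in the sequence length. This is a minimal illustration using plain scaled dot-product attention, not the authors' implementation; the function and variable names (`luna_attention`, `p`, dimensions) are assumptions chosen for clarity.

```python
# Illustrative sketch of the two nested attention steps described in the
# abstract (not the authors' code from fairseq-apollo).
import torch
import torch.nn.functional as F


def attend(query, key, value):
    # Standard scaled dot-product attention.
    scores = query @ key.transpose(-2, -1) / key.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ value


def luna_attention(x, p, context):
    """x: (batch, n, d) input sequence; p: (batch, l, d) extra fixed-length
    sequence with l << n; context: (batch, m, d) sequence attended over
    (equal to x for self-attention)."""
    # Pack step: the fixed-length sequence attends to the context,
    # compressing it into l vectors at O(l * m) cost.
    packed = attend(p, context, context)      # (batch, l, d)
    # Unpack step: the input attends to the packed summary,
    # costing O(n * l) instead of O(n * m).
    unpacked = attend(x, packed, packed)      # (batch, n, d)
    # Luna yields both the attended output and the updated extra sequence,
    # mirroring the "additional corresponding output" in the abstract.
    return unpacked, packed


# Toy usage: a 1024-token sequence, a 16-slot extra sequence, model dim 64.
x = torch.randn(2, 1024, 64)
p = torch.randn(2, 16, 64)
out, p_next = luna_attention(x, p, x)
print(out.shape, p_next.shape)  # (2, 1024, 64) and (2, 16, 64)
```

Because the extra sequence has a fixed length, both attention calls are linear in the sequence length, which is the source of Luna's linear time and space complexity.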

AUTHORS

Xuezhe Ma

Xiang Kong

Sinong Wang

Chunting Zhou

Jonathan May

Hao Ma

Luke Zettlemoyer

Publisher

NeurIPS

Related Publications

May 14, 2025

HUMAN & MACHINE INTELLIGENCE

SPEECH & AUDIO

Emergence of Language in the Developing Brain

Linnea Evanson, Christine Bulteau, Mathilde Chipaux, Georg Dorfmüller, Sarah Ferrand-Sorbets, Emmanuel Raffo, Sarah Rosenberg, Pierre Bourdillon, Jean Remi King

April 25, 2025

RESEARCH

NLP

ReasonIR: Training Retrievers for Reasoning Tasks

Rulin Shao, Qiao Rui, Varsha Kishore, Niklas Muennighoff, Victoria Lin, Daniela Rus, Bryan Kian Hsiang Low, Sewon Min, Scott Yih, Pang Wei Koh, Luke Zettlemoyer

April 17, 2025

HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

Collaborative Reasoner: Self-improving Social Agents with Synthetic Conversations

Ansong Ni, Ruta Desai, Yang Li, Xinjie Lei, Dong Wang, Ramya Raghavendra, Gargi Ghosh, Daniel Li (FAIR), Asli Celikyilmaz

April 04, 2025

NLP

CORE MACHINE LEARNING

Multi-Token Attention

Olga Golovneva, Tianlu Wang, Jason Weston, Sainbayar Sukhbaatar
