HUMAN & MACHINE INTELLIGENCE

RESEARCH

Unified Vision–Language Modeling via Concept Space Alignment

February 27, 2026

Abstract

We introduce v-Sonar, a vision–language embedding space that extends Sonar (Omnilingual Embeddings Team et al., 2026), an embedding space supporting 1,500 text languages and 177 speech languages but no visual inputs. To construct v-Sonar, we propose a post-hoc alignment pipeline that maps the representations of an existing vision encoder into the Sonar space. We thoroughly evaluate v-Sonar and show that its embeddings achieve competitive performance on text-to-video retrieval. Equipped with the Sonar text decoder, v-Sonar further surpasses state-of-the-art vision–language models on video captioning tasks, including Dream-1k (BLEU 24.3 vs. 19.6) and Vatex (BLEU 45.0 vs. 41.5). Leveraging v-Sonar, we first demonstrate that the Large Concept Model (LCM; LCM team et al., 2024), which operates in the Sonar space and is trained on English text only, can perform both single- and multi-concept visual understanding in a zero-shot manner. Finally, we introduce v-LCM, which extends the LCM with vision–language instruction tuning. v-LCM encodes vision and language inputs into a unified sequence of latent embeddings via v-Sonar and Sonar, and it is trained with the same latent diffusion objective for next-embedding prediction as in the LCM's text-only pre-training. Experiments on a large-scale multilingual and multimodal instruction-tuning data mixture highlight the potential of v-LCM: it matches state-of-the-art vision–language models on image/video captioning and question-answering tasks, while significantly outperforming them on 61 of the 62 tested languages, which span high- to low-resource languages.
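To make the alignment step concrete, here is a minimal sketch of what such a post-hoc alignment pipeline could look like: a small trainable projection head is fitted to map frozen vision-encoder features onto the Sonar embeddings of paired captions. The MLP architecture, the MSE objective, and the names `ProjectionHead` and `alignment_step` are illustrative assumptions; the abstract does not specify the paper's actual adapter or loss.

```python
# Illustrative sketch only: the real v-Sonar alignment recipe is not given in
# the abstract. We assume paired (video, caption) data, a frozen vision
# encoder, a frozen Sonar text encoder, and a trainable MLP projection.
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Maps frozen vision-encoder features into the (frozen) Sonar text space."""
    def __init__(self, vision_dim: int, sonar_dim: int, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, sonar_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        return self.net(vision_feats)

def alignment_step(proj: ProjectionHead,
                   vision_feats: torch.Tensor,   # (B, vision_dim), frozen encoder output
                   caption_embs: torch.Tensor,   # (B, sonar_dim), frozen Sonar embeddings
                   optimizer: torch.optim.Optimizer) -> float:
    """One training step: pull projected vision embeddings toward the Sonar
    embeddings of the paired captions (an assumed regression objective)."""
    pred = proj(vision_feats)
    loss = nn.functional.mse_loss(pred, caption_embs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the projection is trained under this scheme, the frozen Sonar text decoder can consume the projected vision embeddings directly, which is what enables captioning through the Sonar decoder as described above.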
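Similarly, the latent diffusion objective for next-embedding prediction can be sketched as a denoising regression over the next concept embedding, conditioned on the preceding embedding sequence. The toy `Denoiser` architecture, the linear noise schedule, and the x0-prediction parameterization below are all assumptions standing in for whatever schedule and backbone the LCM pre-training actually uses.

```python
# Illustrative sketch of a next-embedding diffusion objective; the actual LCM
# noise schedule, denoiser backbone, and conditioning are assumptions here.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy denoiser conditioned on mean-pooled context and diffusion time."""
    def __init__(self, dim: int, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2 + 1, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, noisy: torch.Tensor, t: torch.Tensor,
                context: torch.Tensor) -> torch.Tensor:
        ctx = context.mean(dim=1)  # (B, D): crude pooling of previous embeddings
        return self.net(torch.cat([noisy, ctx, t], dim=-1))

def next_embedding_loss(denoiser: Denoiser,
                        context: torch.Tensor,  # (B, T, D) preceding concept embeddings
                        target: torch.Tensor    # (B, D) next concept embedding
                        ) -> torch.Tensor:
    """Noise the next embedding, then regress the clean embedding from the
    noisy one given the context (an assumed x0-prediction setup)."""
    B = target.shape[0]
    t = torch.rand(B, 1, device=target.device)   # diffusion time in [0, 1]
    noise = torch.randn_like(target)
    noisy = (1.0 - t) * target + t * noise       # assumed linear schedule
    pred = denoiser(noisy, t, context)
    return nn.functional.mse_loss(pred, target)
```

Under this reading, v-LCM would use the same objective unchanged, since vision and language inputs are first encoded into one embedding sequence by v-Sonar and Sonar.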

AUTHORS

Yifu Qiu

Paul-Ambroise Duquenne

Holger Schwenk

Publisher

arXiv, ICLR

Related Publications

February 26, 2026

CONVERSATIONAL AI

RESEARCH

Learning Personalized Agents from Human Feedback

Kaiqu Liang, Julia Kruk, Shengyi Qian, Xianjun Yang, Shengjie Bi, Shaoliang Nie, Michael Zhang, Lijuan Liu, Jaime Fernández Fisac, Shuyan Zhou, Saghar Hosseini

February 11, 2026

RESEARCH

COMPUTER VISION

UniT: Unified Multimodal Chain-of-Thought Test-time Scaling

Leon Liangyu Chen, Haoyu Ma, Zhipeng Fan, Ziqi Huang, Animesh Sinha, Xiaoliang Dai, Jialiang Wang, Zecheng He, Jianwei Yang, Chunyuan Li, Junzhe Sun, Chu Wang, Serena Yeung-Levy, Felix Juefei-Xu

December 18, 2025

RESEARCH

COMPUTER VISION

Pixel Seal: Adversarial-only training for invisible image and video watermarking

Tomáš Souček, Pierre Fernandez, Hady Elsahar, Sylvestre Rebuffi, Valeriu Lacatusu, Tuan Tran, Tom Sander, Alexandre Mourachko

November 19, 2025

RESEARCH

COMPUTER VISION

SAM 3: Segment Anything with Concepts

Nicolas Carion, Laura Gustafson, Yuan-Ting Hu, Shoubhik Debnath, Ronghang Hu, Didac Suris Coll-Vinent, Chaitanya Ryali, Kalyan Vasudev Alwala, Haitham Khedr, Andrew Huang, Jie Lei, Tengyu Ma, Baishan Guo, Arpit Kalla, Markus Marks, Joseph Greer, Meng Wang, Peize Sun, Roman Rädle, Triantafyllos Afouras, Effrosyni Mavroudi, Katherine Xu, Tsung-Han Wu, Yu Zhou, Liliane Momeni, Rishi Hazra, Shuangrui Ding, Sagar Vaze, Francois Porcher, Feng Li, Siyuan Li, Aishwarya Kamath, Ho Kei Cheng, Piotr Dollar, Nikhila Ravi, Kate Saenko, Pengchuan Zhang, Christoph Feichtenhofer
