COMPUTER VISION

PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding

April 17, 2025

Abstract

Vision-language models are integral to computer vision research, yet many high-performing models remain closed-source, obscuring their data, design, and training recipes. The research community has responded by using distillation from black-box models to label training data, achieving strong benchmark results at the cost of measurable scientific progress: without knowing the details of the teacher model and its data sources, progress is difficult to measure. In this paper, we study building a Perception Language Model (PLM) in a fully open and reproducible framework for transparent research in image and video understanding. We analyze standard training pipelines without distillation from proprietary models and explore large-scale synthetic data to identify critical data gaps, particularly in detailed video understanding. To bridge these gaps, we release 2.8M human-labeled instances of fine-grained video question-answer pairs and spatio-temporally grounded video captions. Additionally, we introduce PLM-VideoBench, a suite for evaluating challenging video understanding tasks, focusing on the ability to reason about the "what", "where", "when", and "how" of a video. We make our work fully reproducible by providing data, training recipes, code, and models.
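For readers exploring the released annotations, the minimal Python sketch below illustrates one way fine-grained video question-answer records with temporal grounding could be read if stored as JSON lines. The file name and field names (video_id, question, answer, start_time, end_time, caption) are assumptions made for illustration only, not the official schema of the released data.

import json
from pathlib import Path

def load_video_qa(path: str) -> list[dict]:
    """Load fine-grained video QA records from a JSON-lines file.

    The file name and field names used here are illustrative assumptions,
    not the actual schema of the released PLM data.
    """
    records = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        records.append({
            "video_id": rec.get("video_id"),
            "question": rec.get("question"),
            "answer": rec.get("answer"),
            # Assumed spatio-temporal grounding: a (start, end) segment in seconds.
            "segment": (rec.get("start_time"), rec.get("end_time")),
            "caption": rec.get("caption"),
        })
    return records

if __name__ == "__main__":
    qa = load_video_qa("plm_video_qa.jsonl")  # hypothetical file name
    print(f"Loaded {len(qa)} QA records")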

AUTHORS

Written by

Jang Hyun Cho, Andrea Madotto, Effrosyni Mavroudi, Triantafyllos Afouras, Tushar Nagarajan, Muhammad Maaz, Yale Song, Tengyu Ma, Shuming Hu, Hanoona Rasheed, Peize Sun, Po-Yao Huang, Daniel Bolya, Suyog Jain, Miguel Martin, Huiyu Wang, Nikhila Ravi, Shashank Jain, Tammy Stark, Shane Moon, Babak Damavandi, Vivian Lee, Andrew Westbury, Salman Khan, Philipp Krähenbühl, Piotr Dollar, Lorenzo Torresani, Kristen Grauman, Christoph Feichtenhofer

Publisher

arXiv

Research Topics

Computer Vision

Related Publications

November 11, 2025

COMPUTER VISION

SYSTEMS RESEARCH

CATransformers: Carbon Aware Transformers Through Joint Model-Hardware Optimization

Irene Wang, Mostafa Elhoushi, Ekin Sumbul, Samuel Hsia, Daniel Jiang, Newsha Ardalani, Divya Mahajan, Carole-Jean Wu, Bilge Acun

October 19, 2025

COMPUTER VISION

Enrich and Detect: Video Temporal Grounding with Multimodal LLMs

Shraman Pramanick, Effrosyni Mavroudi, Yale Song, Rama Chellappa, Lorenzo Torresani, Triantafyllos Afouras

October 19, 2025

RESEARCH

NLP

Controlling Multimodal LLMs via Reward-guided Decoding

Oscar Mañas, Pierluca D'Oro, Koustuv Sinha, Adriana Romero Soriano, Michal Drozdzal, Aishwarya Agrawal

September 23, 2025

RESEARCH

NLP

MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interactions

Zilin Xiao, Qi Ma, Mengting Gu, Jason Chen, Xintao Chen, Vicente Ordonez, Vijai Mohan
