COMPUTER VISION

Enrich and Detect: Video Temporal Grounding with Multimodal LLMs

October 19, 2025

Abstract

We introduce ED-VTG, a method for fine-grained video temporal grounding utilizing multimodal large language models. Our approach harnesses the capabilities of multimodal LLMs to jointly process text and video, in order to effectively localize natural language queries in videos through a two-stage process. Rather than being directly grounded, language queries are initially transformed into enriched sentences that incorporate missing details and cues to aid in grounding. In the second stage, these enriched queries are grounded using a lightweight decoder, which specializes in predicting accurate boundaries conditioned on contextualized representations of the enriched queries. To mitigate noise and reduce the impact of hallucinations, our model is trained with a multiple-instance-learning objective that dynamically selects the optimal version of the query for each training sample. We demonstrate state-of-the-art results across various benchmarks in temporal video grounding and paragraph grounding settings. Experiments reveal that our method significantly outperforms all previously proposed LLM-based temporal grounding approaches and is either superior or comparable to specialized models, while maintaining a clear advantage against them in zero-shot evaluation scenarios.
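The abstract outlines the training objective without implementation details. As a rough illustration only, the following is a minimal PyTorch sketch of a multiple-instance-learning selection over query versions; the decoder architecture, the L1 boundary loss, and all names (GroundingDecoder, mil_grounding_loss) are assumptions made for this sketch, not code from the paper.

import torch
import torch.nn as nn

class GroundingDecoder(nn.Module):
    # Hypothetical lightweight decoder head: maps a contextualized query
    # embedding to normalized (start, end) boundaries in [0, 1].
    def __init__(self, dim: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, 2),
            nn.Sigmoid(),
        )

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        return self.head(query_emb)

def mil_grounding_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # Multiple-instance-learning objective: each row of `pred` is the
    # boundary prediction for one version of the query (original or
    # enriched); keeping only the minimum loss dynamically selects the
    # optimal version for this training sample.
    per_version = (pred - gt.unsqueeze(0)).abs().sum(dim=-1)  # shape (K,)
    return per_version.min()

# Toy usage with K = 3 query versions and embedding dimension 256.
decoder = GroundingDecoder(dim=256)
query_embs = torch.randn(3, 256, requires_grad=True)  # contextualized queries
gt_boundary = torch.tensor([0.25, 0.60])              # ground-truth (start, end)
loss = mil_grounding_loss(decoder(query_embs), gt_boundary)
loss.backward()  # gradients flow only through the best-matching version

Taking the minimum over per-version losses means only the best-matching enriched query receives gradient for a given sample, which is one simple way to realize the dynamic selection the abstract describes.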

AUTHORS

Shraman Pramanick

Effrosyni Mavroudi

Yale Song

Rama Chellappa

Lorenzo Torresani

Triantafyllos Afouras

Publisher

ICCV 2025

Research Topics

Computer Vision

Related Publications

November 11, 2025

COMPUTER VISION

SYSTEMS RESEARCH

CATransformers: Carbon Aware Transformers Through Joint Model-Hardware Optimization

Irene Wang, Mostafa Elhoushi, Ekin Sumbul, Samuel Hsia, Daniel Jiang, Newsha Ardalani, Divya Mahajan, Carole-Jean Wu, Bilge Acun

October 19, 2025

RESEARCH

NLP

Controlling Multimodal LLMs via Reward-guided Decoding

Oscar Mañas, Pierluca D'Oro, Koustuv Sinha, Adriana Romero Soriano, Michal Drozdzal, Aishwarya Agrawal

September 23, 2025

RESEARCH

NLP

MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interactions

Zilin Xiao, Qi Ma, Mengting Gu, Jason Chen, Xintao Chen, Vicente Ordonez, Vijai Mohan

August 14, 2025

RESEARCH

COMPUTER VISION

DINOv3

Oriane Siméoni, Huy V. Vo, Maximilian Seitzer, Federico Baldassarre, Maxime Oquab, Cijo Jose, Vasil Khalidov, Marc Szafraniec, Seungeun Yi, Michaël Ramamonjisoa, Francisco Massa, Daniel Haziza, Luca Wehrstedt, Jianyuan Wang, Timothée Darcet, Theo Moutakanni, Leonel Sentana, Claire Roberts, Andrea Vedaldi, Jamie Tolan, John Brandt, Camille Couprie, Julien Mairal, Herve Jegou, Patrick Labatut, Piotr Bojanowski
