Computer Vision

What Makes a Video a Video: Analyzing Temporal Information in Video Understanding Models and Datasets

June 18, 2018

Abstract

The ability to capture temporal information has been critical to the development of video understanding models. While there have been numerous attempts at modeling motion in videos, an explicit analysis of the effect of temporal information on video understanding is still missing. In this work, we aim to bridge this gap and ask the following question: how important is the motion in a video for recognizing the action? To this end, we propose two novel frameworks, (i) a class-agnostic temporal generator and (ii) a motion-invariant frame selector, which reduce or remove motion for an ablation analysis without introducing other artifacts. This isolates the analysis of motion from other aspects of the video. The proposed frameworks provide a much tighter estimate of the effect of motion than the baselines in our analysis (from 25% to 6% on UCF101 and from 15% to 5% on Kinetics). Our analysis offers critical insights into existing models such as C3D, and shows how they can achieve comparable results with a sparser set of frames.
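The core ablation idea — removing motion from a clip without introducing other artifacts, so that any drop in recognition accuracy can be attributed to temporal information — can be illustrated with a minimal sketch. This is not the paper's class-agnostic temporal generator or motion-invariant frame selector; it is a hypothetical baseline that freezes a clip to a single frame, which is the simplest way to zero out temporal variation while keeping the input shape a model expects.

```python
import numpy as np

def remove_motion(clip: np.ndarray, frame_index: int = 0) -> np.ndarray:
    """Replace every frame of a clip with one chosen frame.

    clip: array of shape (T, H, W, C). The result has the same shape
    but zero frame-to-frame variation, so the accuracy a model loses
    on such clips (vs. the originals) bounds its reliance on motion.
    """
    frame = clip[frame_index]
    # Broadcast the single frame across the temporal axis, then copy
    # so the result is a writable, contiguous array.
    return np.broadcast_to(frame, clip.shape).copy()

# Toy clip: 16 frames of 8x8 RGB with per-frame variation.
rng = np.random.default_rng(0)
clip = rng.random((16, 8, 8, 3))

static = remove_motion(clip)
```

Feeding `static` and `clip` to the same action-recognition model and comparing accuracies would give a (loose) estimate of how much the model depends on motion; the paper's frameworks are designed to make that estimate much tighter.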

Related Publications

February 27, 2026

Human & Machine Intelligence

Unified Vision–Language Modeling via Concept Space Alignment

Yifu Qiu, Holger Schwenk, Paul-Ambroise Duquenne

February 11, 2026

Computer Vision

UniT: Unified Multimodal Chain-of-Thought Test-time Scaling

Leon Liangyu Chen, Haoyu Ma, Ziqi Huang, Xiaoliang Dai, Jialiang Wang, Zecheng He, Jianwei Yang, Chunyuan Li, Serena Yeung-Levy, Animesh Sinha, Chu Wang, Felix Juefei-Xu, Junzhe Sun, Zhipeng Fan

December 18, 2025

Computer Vision

Pixel Seal: Adversarial-only training for invisible image and video watermarking

Alexandre Mourachko, Hady Elsahar, Pierre Fernandez, Sylvestre Rebuffi, Tom Sander, Tomáš Souček, Tuan Tran, Valeriu Lacatusu

November 19, 2025

Computer Vision

SAM 3: Segment Anything with Concepts

Ronghang Hu, Peize Sun, Triantafyllos Afouras, Effrosyni Mavroudi, Katherine Xu, Tsung-Han Wu, Yu Zhou, Liliane Momeni, Shuangrui Ding, Sagar Vaze, Francois Porcher, Feng Li, Siyuan Li, Aishwarya Kamath, Ho Kei Cheng, Andrew Huang, Arpit Kalla, Baishan Guo, Chaitanya Ryali, Christoph Feichtenhofer, Didac Suris Coll-Vinent, Haitham Khedr, Jie Lei, Joseph Greer, Kalyan Vasudev Alwala, Kate Saenko, Laura Gustafson, Markus Marks, Meng Wang, Nicolas Carion, Nikhila Ravi, Pengchuan Zhang, Piotr Dollar, Rishi Hazra, Roman Rädle, Shoubhik Debnath, Tengyu Ma, Yuan-Ting Hu

June 11, 2019

Computer Vision

ELF OpenGo: An Analysis and Open Reimplementation of AlphaZero

Yuandong Tian, Jerry Ma, Qucheng Gong, Shubho Sengupta, Zhuoyuan Chen, James Pinkerton, Larry Zitnick

April 30, 2018

NLP

Computer Vision

Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent

Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H. Miller, Arthur Szlam, Douwe Kiela, Jason Weston

October 10, 2016

Speech & Audio

Computer Vision

Polysemous Codes

Matthijs Douze, Hervé Jégou, Florent Perronnin

June 18, 2018

Speech & Audio

Computer Vision

Low-shot learning with large-scale diffusion

Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou
