Learning to Generate Grounded Visual Captions without Localization Supervision

July 17, 2020

Abstract

When automatically generating a sentence description for an image or video, it often remains unclear how well the generated caption is grounded, that is, whether the model uses the correct image regions to output particular words, or whether the model is hallucinating based on priors in the dataset and/or the language model. The most common way of relating image regions with words in captioning models is through an attention mechanism over the regions that are used as input to predict the next word. The model must therefore learn to predict the attentional weights without knowing the word it should localize. This is difficult to train without grounding supervision, since recurrent models can propagate past information and there is no explicit signal to force the captioning model to properly ground the individual decoded words. In this work, we help the model achieve this via a novel cyclical training regimen that forces the model to localize each word in the image after the sentence decoder generates it, and then reconstruct the sentence from the localized image region(s) to match the ground truth. Our proposed framework only requires learning one extra fully-connected layer (the localizer), a layer that can be removed at test time. We show that our model significantly improves grounding accuracy without relying on grounding supervision or introducing extra computation during inference, for both image and video captioning tasks. Code is available at https://github.com/chihyaoma/cyclical-visual-captioning.
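To make the decode-localize-reconstruct cycle concrete, below is a minimal PyTorch sketch. The class and method names (CyclicalCaptioner, attend, localize, reconstruct), the feature dimensions, and the choice to share one decoder cell between decoding and reconstruction are illustrative assumptions, not the authors' exact implementation; the detail taken from the abstract is that the localizer is a single fully-connected layer used only during training.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CyclicalCaptioner(nn.Module):
    # Illustrative sketch of the cyclical regimen described in the abstract.
    # Module layout and dimensions are assumptions, not the paper's code.
    def __init__(self, vocab_size, region_dim=2048, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.dec_cell = nn.LSTMCell(hidden_dim + region_dim, hidden_dim)
        self.attn_score = nn.Linear(hidden_dim + region_dim, 1)
        self.word_head = nn.Linear(hidden_dim, vocab_size)
        # The one extra fully-connected layer: the localizer.
        self.localizer = nn.Linear(hidden_dim, region_dim)

    def attend(self, h, regions):
        # Soft attention over region features, conditioned on decoder state.
        B, R, _ = regions.shape
        q = h.unsqueeze(1).expand(B, R, h.size(-1))
        w = F.softmax(self.attn_score(torch.cat([q, regions], -1)).squeeze(-1), -1)
        return torch.bmm(w.unsqueeze(1), regions).squeeze(1)

    def decode(self, regions, captions):
        # Stage 1: standard attention decoding (teacher-forced; shifted
        # targets omitted for brevity).
        B, T = captions.shape
        h = regions.new_zeros(B, self.word_head.in_features)
        c = torch.zeros_like(h)
        logits = []
        for t in range(T):
            ctx = self.attend(h, regions)
            x = torch.cat([self.embed(captions[:, t]), ctx], -1)
            h, c = self.dec_cell(x, (h, c))
            logits.append(self.word_head(h))
        return torch.stack(logits, 1)

    def localize(self, words, regions):
        # Stage 2: ground each decoded word by scoring its embedding against
        # the regions; returns per-word localized region features.
        q = self.localizer(self.embed(words))                      # (B, T, D)
        w = F.softmax(torch.bmm(q, regions.transpose(1, 2)), -1)   # (B, T, R)
        return torch.bmm(w, regions)                               # (B, T, D)

    def reconstruct(self, localized, captions):
        # Stage 3: regenerate the sentence from the localized regions so the
        # reconstruction loss pushes the model onto the right regions.
        B, T = captions.shape
        h = localized.new_zeros(B, self.word_head.in_features)
        c = torch.zeros_like(h)
        logits = []
        for t in range(T):
            x = torch.cat([self.embed(captions[:, t]), localized[:, t]], -1)
            h, c = self.dec_cell(x, (h, c))
            logits.append(self.word_head(h))
        return torch.stack(logits, 1)

A hypothetical training step chains the three stages and supervises both passes with the ground-truth caption:

model = CyclicalCaptioner(vocab_size=10000)
regions = torch.randn(4, 36, 2048)         # e.g. 36 detected region features
caps = torch.randint(0, 10000, (4, 12))    # ground-truth captions
dec_logits = model.decode(regions, caps)
words = dec_logits.argmax(-1)              # decoded words to be re-grounded
rec_logits = model.reconstruct(model.localize(words, regions), caps)
loss = (F.cross_entropy(dec_logits.flatten(0, 1), caps.flatten())
        + F.cross_entropy(rec_logits.flatten(0, 1), caps.flatten()))

At test time only decode is run, so dropping the localizer adds no inference cost, consistent with the claim in the abstract.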

AUTHORS

Marcus Rohrbach

Peter Vajda

Chih-Yao Ma

Ghassan AlRegib

Yannis Kalantidis

Zsolt Kira

Publisher

ECCV

Research Topics

Computer Vision
