NLP

Visual Curiosity: Learning to Ask Questions to Learn Visual Recognition

October 29, 2018

Abstract

In an open-world setting, it is inevitable that an intelligent agent (e.g., a robot) will encounter visual objects, attributes, or relationships it does not recognize. In this work, we develop an agent empowered with visual curiosity, i.e., the ability to ask questions of an Oracle (e.g., a human) about the contents of images (e.g., "What is the object on the left side of the red cube?") and to build visual recognition models from the answers it receives (e.g., "Cylinder"). To do this, the agent must (1) understand what it recognizes and what it does not, (2) formulate a valid, unambiguous, and informative language query (a question) to ask the Oracle, (3) derive the parameters of visual classifiers from the Oracle's response, and (4) leverage the updated visual classifiers to ask clearer, more informative follow-up questions. Specifically, we propose a novel framework that formulates the learning of visual curiosity as a reinforcement learning problem. In this framework, all components of the agent (a visual recognition module to see, a question generation policy to ask, an answer digestion module to understand, and a graph memory module to memorize) are trained entirely end-to-end to maximize a reward derived from the scene graph the agent builds over the course of its dialog with the Oracle. Importantly, the question generation policy is disentangled from the visual recognition system and from the specifics of the environment, which yields a kind of double generalization: the policy generalizes both to new environments and to a new pair of eyes, i.e., a new visual system. Trained on a synthetic dataset, our agent learns new visual concepts significantly faster than several heuristic baselines, even when tested on synthetic environments with novel objects, as well as in a realistic environment.
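To make the see-ask-understand-memorize loop from the abstract concrete, here is a minimal toy sketch. Everything in it is an illustrative assumption: the names (Agent, Oracle, WORLD, reward), the toy label table standing in for visual classifiers, and the prefer-unknown question heuristic are invented for this sketch; in the paper these components are neural modules trained end-to-end with reinforcement learning, not hand-coded rules.

```python
import random

# Toy "world": node ids with ground-truth concept labels known only to the Oracle.
WORLD = {"obj1": "cube", "obj2": "cylinder", "obj3": "sphere"}

class Oracle:
    """Answers valid, unambiguous questions about the scene (stand-in for a human)."""
    def __init__(self, world):
        self.world = world

    def answer(self, node):
        # e.g., Q: "What is obj2?" -> A: "cylinder"
        return self.world[node]

class Agent:
    """Bundles the four modules: see (recognizes), ask (choose_question),
    understand (digest_answer), and memorize (self.memory, the graph memory)."""
    def __init__(self):
        self.memory = {}  # graph memory: node id -> concept label learned so far

    def recognizes(self, node):
        # "Understand what it recognizes and what it does not."
        return node in self.memory

    def choose_question(self, nodes):
        # Question policy reduced to a heuristic: ask about an unrecognized node.
        unknown = [n for n in nodes if not self.recognizes(n)]
        return random.choice(unknown) if unknown else None

    def digest_answer(self, node, label):
        # Answer digestion: a label table stands in for classifier parameters here.
        self.memory[node] = label

def reward(memory, world):
    # Reward derived from the scene graph: fraction of correctly labeled nodes.
    return sum(memory.get(n) == lbl for n, lbl in world.items()) / len(world)

agent, oracle = Agent(), Oracle(WORLD)
for turn in range(len(WORLD)):
    node = agent.choose_question(list(WORLD))
    if node is None:
        break
    agent.digest_answer(node, oracle.answer(node))  # ask the Oracle, digest the answer
    print(f"turn {turn}: asked about {node}, reward = {reward(agent.memory, WORLD):.2f}")
```

In the actual system, the hand-coded heuristic above is replaced by a learned question generation policy; its disentanglement from the visual module is what enables the double generalization the abstract describes.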

AUTHORS

Dhruv Batra

Devi Parikh

Jianwei Yang

Jiasen Lu

Stefan Lee

Publisher

CoRL

Related Publications

September 05, 2024

CONVERSATIONAL AI

NLP

Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Luke Zettlemoyer, Omer Levy, Xuezhe Ma

August 20, 2024

CONVERSATIONAL AI

NLP

Lumos: Empowering Multimodal LLMs with Scene Text Recognition

Ashish Shenoy, Yichao Lu, Srihari Jayakumar, Debojeet Chatterjee, Mohsen Moslehpour, Pierce Chuang, Abhay Harpale, Vikas Bhardwaj, Di Xu (SWE), Shicong Zhao, Ankit Ramchandani, Luna Dong, Anuj Kumar

August 11, 2024

NLP

LM Transparency Tool: Interactive Tool for Analyzing Transformer Language Models

Igor Tufanov, Karen Hambardzumyan, Javier Ferrando, Lena Voita

August 11, 2024

NLP

MuTox: Universal MUltilingual Audio-based TOXicity Dataset and Zero-shot Detector

Marta R. Costa-jussà, Mariano Coria Meglioli, Pierre Andrews, David Dale, Kae Hansanti, Elahe Kalbassi, Christophe Ropers, Carleigh Wood
