COMPUTER VISION

ImageBind: One Embedding Space To Bind Them All

May 09, 2023

Abstract

We present IMAGEBIND, an approach to learn a joint embedding across six different modalities: images, text, audio, depth, thermal, and IMU data. We show that not all combinations of paired data are necessary to train such a joint embedding; only image-paired data is sufficient to bind the modalities together. IMAGEBIND can leverage recent large-scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications ‘out-of-the-box’ including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation. The emergent capabilities improve with the strength of the image encoder, and we set a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that IMAGEBIND serves as a new way to evaluate vision models for visual and non-visual tasks.
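
To make the binding idea concrete, the sketch below shows one way such a joint embedding can be trained in PyTorch: each non-image modality is aligned to images with a symmetric InfoNCE contrastive loss over its own image-paired dataset, so modalities that are never paired with each other (here, audio and depth) still become comparable through the shared image anchor. The encoder stand-ins, dimensions, and temperature are illustrative assumptions, not the released ImageBind implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(image_emb, other_emb, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalized image and
    other-modality embeddings of shape (batch, dim)."""
    image_emb = F.normalize(image_emb, dim=-1)
    other_emb = F.normalize(other_emb, dim=-1)
    logits = image_emb @ other_emb.t() / temperature  # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched image/other pairs are positives; every other pair in the
    # batch is a negative, in both retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical per-modality encoders mapping raw inputs into a shared
# d-dimensional embedding space (stand-ins for a ViT image encoder,
# an audio transformer over spectrograms, a depth encoder, etc.).
image_encoder = torch.nn.Linear(1024, 512)
audio_encoder = torch.nn.Linear(128, 512)
depth_encoder = torch.nn.Linear(256, 512)

# Two independent image-paired batches: (image, audio) and (image, depth).
# Random tensors stand in for real data; no (audio, depth) pairs are
# ever seen during training.
img_a, aud = torch.randn(8, 1024), torch.randn(8, 128)
img_d, dep = torch.randn(8, 1024), torch.randn(8, 256)

loss = (info_nce(image_encoder(img_a), audio_encoder(aud)) +
        info_nce(image_encoder(img_d), depth_encoder(dep)))
loss.backward()

# After training, audio and depth embeddings can be compared directly
# (emergent alignment), enabling cross-modal retrieval via cosine
# similarity and simple embedding arithmetic for composing modalities.
```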

AUTHORS

Rohit Girdhar

Alaa El-Nouby

Zhuang Liu

Mannat Singh

Kalyan Vasudev Alwala

Armand Joulin

Ishan Misra

Publisher

CVPR

Research Topics

Computer Vision

Related Publications

June 20, 2024

COMPUTER VISION

ICON: Incremental CONfidence for Joint Pose and Radiance Field Optimization

Weiyao Wang, Pierre Gleize, Hao Tang, Xingyu Chen, Kevin Liang, Matt Feiszli

June 17, 2024

COMPUTER VISION

Move Anything with Layered Scene Diffusion

Jiawei Ren, Frost Xu, Jerry Wu, Ziwei Liu, Tao Xiang, Antoine Toisoul

June 14, 2024

COMPUTER VISION

Decomposed evaluations of geographic disparities in text-to-image models

Abhishek Sureddy, Dishant Padalia, Nandhinee Periyakaruppa, Oindrila Saha, Adina Williams, Adriana Romero Soriano, Megan Richards, Polina Kirichenko, Melissa Hall

June 05, 2024

COMPUTER VISION

Cache Me if You Can: Accelerating Diffusion Models through Block Caching

Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, Christian Rupprecht, Daniel Cremers, Peter Vajda, Jialiang Wang
