May 09, 2023
We present IMAGEBIND, an approach to learn a joint embedding across six different modalities: images, text, audio, depth, thermal, and IMU data. We show that not all combinations of paired data are necessary to train such a joint embedding; image-paired data alone is sufficient to bind the modalities together. IMAGEBIND can leverage recent large-scale vision-language models and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications 'out-of-the-box', including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection, and generation. The emergent capabilities improve with the strength of the image encoder, and we set a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that IMAGEBIND serves as a new way to evaluate vision models for visual and non-visual tasks.
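The abstract mentions cross-modal retrieval and composing modalities with arithmetic; the sketch below illustrates how a shared, L2-normalized embedding space supports both. The linear "encoders", feature dimensions, and embedding size are illustrative placeholders, not the paper's actual architecture or released API.

```python
# Minimal sketch of cross-modal retrieval and embedding arithmetic in a joint
# embedding space. The tiny linear "encoders" stand in for ImageBind's
# modality-specific towers; only the idea of comparing L2-normalized
# embeddings with cosine similarity is taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 1024  # assumed size of the shared embedding space

# Placeholder encoders: in the real model these are per-modality transformers.
image_encoder = nn.Linear(2048, EMBED_DIM)  # pooled image features -> joint space
audio_encoder = nn.Linear(128, EMBED_DIM)   # pooled audio features -> joint space
text_encoder = nn.Linear(512, EMBED_DIM)    # pooled text features  -> joint space

def embed(encoder: nn.Module, features: torch.Tensor) -> torch.Tensor:
    """Project features into the joint space and L2-normalize them."""
    return F.normalize(encoder(features), dim=-1)

# Dummy inputs standing in for real image / audio / text features.
image_emb = embed(image_encoder, torch.randn(1, 2048))
audio_emb = embed(audio_encoder, torch.randn(1, 128))
text_bank = embed(text_encoder, torch.randn(10, 512))  # 10 candidate captions

# Cross-modal retrieval: rank candidate texts by cosine similarity to the audio.
scores = audio_emb @ text_bank.T          # (1, 10) cosine similarities
best_caption = scores.argmax(dim=-1)

# Composing modalities with arithmetic: add image and audio embeddings,
# re-normalize, and retrieve against the same text bank.
composed = F.normalize(image_emb + audio_emb, dim=-1)
composed_scores = composed @ text_bank.T
```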
Written by
Rohit Girdhar
Alaa El-Nouby
Zhuang Liu
Mannat Singh
Kalyan Vasudev Alwala
Armand Joulin
Ishan Misra
Publisher
CVPR