June 04, 2023
We address the task of weakly-supervised few-shot image classification and segmentation by leveraging a Vision Transformer (ViT) pretrained with self-supervision. Our proposed method takes token representations from the self-supervised ViT and leverages their correlations, via self-attention, to produce classification and segmentation predictions through separate task heads. Our model learns to perform classification and segmentation without pixel-level labels during training, using only image-level labels. To do this, it uses attention maps, created from tokens generated by the self-supervised ViT backbone, as pixel-level pseudo-labels. We also explore a practical setup with "mixed" supervision, where a small number of training images contain ground-truth pixel-level labels and the remaining images have only image-level labels. For this mixed setup, we propose to improve the pseudo-labels using a pseudo-label enhancer trained on the available ground-truth pixel-level labels. Experiments on Pascal-5i and COCO-20i demonstrate significant performance gains across a variety of supervision settings, in particular when few or no pixel-level labels are available.
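The core pseudo-labeling idea described above can be illustrated with a minimal sketch: threshold a ViT attention map (e.g. the [CLS] token's attention to patch tokens, averaged over heads) to obtain a binary foreground mask that stands in for missing pixel-level labels. The function below is a hypothetical simplification, not the paper's actual pipeline; the input format and threshold are assumptions.

```python
def attention_pseudo_labels(cls_attn, threshold=0.5):
    """Turn a 2D attention map (list of lists of floats, e.g. a ViT
    [CLS]-to-patch attention grid) into a binary pseudo-label mask.

    This is a hypothetical illustration: min-max normalize the map,
    then mark patches above `threshold` as foreground (1).
    """
    flat = [v for row in cls_attn for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0  # avoid division by zero on a flat map
    return [[1 if (v - lo) / scale >= threshold else 0 for v in row]
            for row in cls_attn]

# Toy 4x4 attention map peaking on a central object region.
attn = [[0.1, 0.1, 0.1, 0.1],
        [0.1, 0.9, 0.8, 0.1],
        [0.1, 0.7, 0.9, 0.1],
        [0.1, 0.1, 0.1, 0.1]]
mask = attention_pseudo_labels(attn, threshold=0.5)
# Central high-attention patches become foreground pseudo-labels.
```

In the mixed-supervision setting, such raw masks would additionally pass through a learned enhancer trained on the few available ground-truth masks before being used as supervision.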
Written by
Dahyun Kang
Peter Koniusz
Minsu Cho
Naila Murray
Publisher
CVPR