RESEARCH

COMPUTER VISION

Fixing the train-test resolution discrepancy

December 09, 2019

Abstract

Data augmentation is key to the training of neural networks for image classification. This paper first shows that existing augmentations induce a significant discrepancy between the size of the objects seen by the classifier at train and test time: in fact, a lower train resolution improves the classification at test time! We then propose a simple strategy to optimize the classifier performance that employs different train and test resolutions. It relies on a computationally cheap fine-tuning of the network at the test resolution. This enables training strong classifiers using small training images, and therefore significantly reduces the training time. For instance, we obtain 77.1% top-1 accuracy on ImageNet with a ResNet-50 trained on 128×128 images, and 79.8% with one trained at 224×224. A ResNeXt-101 32x48d pre-trained with weak supervision on 940 million 224×224 images and further optimized with our technique for test resolution 320×320 achieves 86.4% top-1 accuracy (top-5: 98.0%). To the best of our knowledge, this is the highest ImageNet single-crop accuracy to date.
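The two-stage strategy the abstract describes (train at a low resolution, then cheaply fine-tune at the higher test resolution) can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the paper's exact recipe: the dataset path is hypothetical, the hyper-parameters are placeholders, and updating only the final classifier while letting the batch-norm statistics adapt is one way to keep the fine-tuning inexpensive.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_res, test_res = 128, 224  # train small, evaluate (and fine-tune) larger

model = models.resnet50(num_classes=1000)

# Stage 1: standard training at the low resolution (training loop elided).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(train_res),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Stage 2: cheap fine-tuning at the test resolution, using test-time style
# preprocessing so object sizes match what the classifier will see at test time.
finetune_tf = transforms.Compose([
    transforms.Resize(int(test_res * 1.15)),
    transforms.CenterCrop(test_res),
    transforms.ToTensor(),
])
finetune_set = datasets.ImageFolder("path/to/imagenet/train", finetune_tf)  # hypothetical path
loader = torch.utils.data.DataLoader(finetune_set, batch_size=64, shuffle=True)

# Freeze everything except the classifier; keeping the model in train mode
# lets the batch-norm running statistics adapt to the new resolution.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
model.train()

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for images, targets in loader:  # a short pass over the data is enough
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()

Evaluation would then use the same test-resolution preprocessing (resize plus center crop at test_res) with the model switched back to eval mode.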

AUTHORS

Written by

Andrea Vedaldi

Hervé Jegou

Hugo Touvron

Matthijs Douze

Publisher

NeurIPS

Research Topics

Computer Vision

Related Publications

September 30, 2023

INTEGRITY

COMPUTER VISION

The Stable Signature: Rooting Watermarks in Latent Diffusion Models

Pierre Fernandez, Guillaume Couairon, Hervé Jegou, Matthijs Douze, Teddy Furon

September 29, 2023

COMPUTER VISION

Among Us: Adversarially Robust Collaborative Perception by Consensus

Yiming Li, Qi Fang, Jiamu Bai, Siheng Chen, Felix Xu, Chen Feng

September 27, 2023

COMPUTER VISION

Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack

Xiaoliang Dai, Ji Hou, Kevin Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, Matthew Yu, Abhishek Kadian, Filip Radenovic, Dhruv Mahajan, Kunpeng Li, Yue (R) Zhao, Vladan Petrovic, Mitesh Kumar Singh, Simran Motwani, Yiwen Song, Yi Wen, Roshan Sumbaly, Vignesh Ramanathan, Zijian He, Peter Vajda, Devi Parikh

September 22, 2023

COMPUTER VISION

CORE MACHINE LEARNING

Common Corruption Robustness of Point Cloud Detectors: Benchmark and Enhancement

Shuangzhi Li, Zhijie Wang, Felix Xu, Qing Guo, Xingyu Li, Lei Ma
