May 4, 2021
We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the padding mechanism. Depending on several aspects of convolution arithmetic, this mechanism can apply the padding unevenly, leading to asymmetries in the learned weights. We demonstrate how such bias can be detrimental to certain tasks such as small object detection: the activation is suppressed if the stimulus lies in the impacted area, leading to blind spots and misdetection. We propose solutions to mitigate spatial bias and demonstrate how they can improve model accuracy.
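The uneven padding described above can be seen with a small counting exercise (a minimal sketch, not the paper's code): for a stride-2, size-3 convolution over a zero-padded input of even length, the sliding windows never reach the rightmost padding column, so padding is effectively applied on one side only.

```python
import numpy as np

def coverage(n, k=3, s=2, p=1):
    """Count how often each column of a zero-padded 1-D input of length n
    is visited by a convolution window of size k with stride s and padding p."""
    padded = n + 2 * p
    counts = np.zeros(padded, dtype=int)
    num_windows = (padded - k) // s + 1
    for i in range(num_windows):
        counts[i * s : i * s + k] += 1
    return counts

# Even input length: the right padding column (last entry) is never read,
# so the effective padding is asymmetric (left/top only).
print(coverage(4))  # -> [1 1 2 1 1 0]
```

The same arithmetic applies per axis in 2-D, which is why the asymmetry shows up along the right and bottom borders of feature maps.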
Written by
Bilal Alsallakh
Narine Kokhlikyan
Vivek Miglani
Jun Yuan
Orion Reblitz-Richardson
Publisher
ICLR 2021
Research Topics
Core Machine Learning