February 7, 2020
Detecting manipulated images has become a significant emerging challenge. The advent of image-sharing platforms and the easy availability of advanced photo-editing software have resulted in large quantities of manipulated images being shared on the internet. While the intent behind such manipulations varies widely, concerns about the spread of false news and misinformation are growing. Current state-of-the-art methods for detecting manipulated images suffer from a lack of training data due to the laborious labeling process. We address this problem in this paper by introducing a manipulated-image generation process that creates true positives from currently available datasets. Drawing on traditional work in image blending, we propose a novel generator for creating such examples. In addition, we further create examples that force the algorithm to focus on boundary artifacts during training. Strong experimental results validate our proposal.
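The abstract does not include the authors' implementation, but the core idea it describes, compositing a donor region into a target image and keeping the blend boundary as a supervision signal, can be sketched as follows. All function names and the simple box-blur feathering scheme below are illustrative assumptions, not the paper's actual generator:

```python
import numpy as np

def splice_with_blend(target, donor, mask, feather=3):
    """Paste the masked region of `donor` into `target`, softly
    alpha-blending across the mask edge so the seam mimics the subtle
    boundary artifacts found in real manipulated images.

    target, donor: HxWx3 float arrays in [0, 1], same shape.
    mask: HxW binary array marking the donor region.
    feather: number of smoothing passes widening the transition band.
    Returns the composite image and a ground-truth boundary label.
    """
    # Soften the binary mask with repeated 5-point averaging,
    # producing an alpha matte that ramps from 0 to 1 near the edge.
    alpha = mask.astype(np.float64)
    for _ in range(feather):
        padded = np.pad(alpha, 1, mode="edge")
        alpha = (
            padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:]
            + padded[1:-1, 1:-1]
        ) / 5.0
    alpha = alpha[..., None]  # broadcast over the channel axis
    composite = alpha * donor + (1.0 - alpha) * target
    # The boundary label marks pixels where the matte is "soft",
    # i.e. where blending artifacts are introduced.
    boundary = ((alpha[..., 0] > 0.01) & (alpha[..., 0] < 0.99)).astype(np.uint8)
    return composite, boundary

# Hypothetical usage: any dataset with object masks (e.g. a
# segmentation dataset) supplies target/donor pairs and masks.
rng = np.random.default_rng(0)
target = rng.random((32, 32, 3))
donor = rng.random((32, 32, 3))
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1
fake, boundary = splice_with_blend(target, donor, mask)
```

In this sketch the returned boundary map is what lets a detector be trained to focus on seam artifacts rather than image content; the paper's generator is more sophisticated, drawing on traditional image-blending techniques rather than simple alpha feathering.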
Written by
Peng Zhou, Bor-Chun Chen, Xintong Han, Mahyar Najibi, Abhinav Shrivastava, Ser-Nam Lim, Larry S. Davis