Fighting Fake News: Image Splice Detection via Learned Self-Consistency


Minyoung Huh*1, 2
Andrew Liu*1
Andrew Owens1
Alexei A. Efros1

1UC Berkeley

2Carnegie Mellon University

Code [GitHub]
Paper (10.3 MB)




Abstract

Advances in photo editing and manipulation tools have made it significantly easier to create fake imagery. Learning to detect such manipulations, however, remains a challenging problem due to the lack of sufficient training data. In this paper, we propose a model that learns to detect visual manipulations from unlabeled data through self-supervision. Given a large collection of real photographs with automatically recorded EXIF metadata, we train a model to determine whether an image is self-consistent — that is, whether its content could have been produced by a single imaging pipeline. We apply this self-supervised learning method to the task of detecting and localizing image splices. Although the proposed model obtains state-of-the-art performance on several benchmarks, we see it as merely a step in the long quest for a truly general-purpose visual forensics tool.
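The self-consistency idea above can be sketched in a few lines: split the image into patches, score every pair of patches for whether they could share one imaging pipeline, and flag patches that disagree with the majority. This is a minimal illustration, not the authors' implementation; the `exif_consistency` function below is a hypothetical stand-in (a simple intensity statistic) for the learned Siamese network that predicts shared EXIF attributes.

```python
import numpy as np

def patch_grid(image, patch=32, stride=32):
    """Split an image of shape (H, W, C) into a grid of patches."""
    H, W = image.shape[:2]
    patches, coords = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return patches, coords

def exif_consistency(p1, p2):
    """Hypothetical stand-in for the learned predictor: returns a
    score in (0, 1] for whether two patches came from the same
    imaging pipeline. Here we fake it with a mean-intensity gap;
    the real model predicts shared EXIF metadata attributes."""
    return float(np.exp(-abs(p1.mean() - p2.mean()) / 16.0))

def consistency_map(image, patch=32):
    """Average each patch's pairwise consistency against all other
    patches; spliced regions score low against the majority."""
    patches, coords = patch_grid(image, patch)
    n = len(patches)
    scores = np.zeros(n)
    for i in range(n):
        scores[i] = np.mean([exif_consistency(patches[i], patches[j])
                             for j in range(n) if j != i])
    return scores, coords
```

For example, pasting a bright patch into a dark image and running `consistency_map` yields a low score for the pasted region and high scores elsewhere, which is the intuition behind localizing a splice without ever seeing labeled fakes.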

Video



Exif Consistency Training


[GitHub]


In-the-Wild Image Splice Dataset


[Download (89.2 MB)]


Paper

M. Huh, A. Liu, A. Owens, A. A. Efros,
Fighting Fake News: Image Splice Detection via Learned Self-Consistency

arXiv preprint, 2018 (arXiv).

[Bibtex]


Acknowledgements

This work was supported, in part, by DARPA grant FA8750-16-C-0166 and the UC Berkeley Center for Long-Term Cybersecurity. We thank Hany Farid and Shruti Agarwal for their advice, assistance, and inspiration in building this project, David Fouhey and Allan Jabri for helping with the editing, and Abhinav Gupta for letting us use his GPUs. Finally, we thank the many Reddit and Onion artists who unknowingly contributed to our dataset.

This website template is being rented from our colorful landlords.