2019 IEEE International Workshop on Information Forensics and Security (WIFS)
DOI: 10.1109/wifs47025.2019.9035107

Detecting and Simulating Artifacts in GAN Fake Images

Abstract: To detect GAN-generated images, conventional supervised machine learning algorithms require collecting a large number of real and fake images from the targeted GAN model. However, the specific model used by the attacker is often unavailable. To address this, we propose a GAN simulator, AutoGAN, which can simulate the artifacts produced by the common pipeline shared by several popular GAN models. Additionally, we identify a unique artifact caused by the up-sampling component included in the common GAN pipeline. We…
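The up-sampling artifact the abstract refers to can be made concrete with a small numerical sketch (an illustration of the general phenomenon, not the paper's AutoGAN code): nearest-neighbour up-sampling replicates an image's low-frequency spectrum into the high-frequency corners of its 2-D FFT, exactly the kind of spectral fingerprint a frequency-domain detector can pick up.

```python
import numpy as np

rng = np.random.default_rng(0)

def lowpass(img, cutoff=0.15):
    """Keep only frequencies below `cutoff` (cycles/pixel) in both axes."""
    fy = np.abs(np.fft.fftfreq(img.shape[0]))[:, None] < cutoff
    fx = np.abs(np.fft.fftfreq(img.shape[1]))[None, :] < cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * (fy & fx)))

def corner_energy(img):
    """Mean |FFT| in the highest-frequency corner of the centred spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    return spec[:8, :8].mean()

# A genuinely smooth 64x64 image vs. a smooth 32x32 image up-sampled 2x
# by pixel replication (zero-insertion followed by a 2x2 box filter).
smooth = lowpass(rng.standard_normal((64, 64)))
up = np.repeat(np.repeat(lowpass(rng.standard_normal((32, 32))), 2, axis=0),
               2, axis=1)

# Up-sampling copies the low-frequency spectrum into the corners;
# the natural smooth image has essentially no energy there.
print(corner_energy(up) > 100 * corner_energy(smooth))  # prints: True
```

The ratio is large because zero-insertion replicates the spectrum at the Nyquist frequency and the implicit box filter only attenuates, rather than removes, the replica; this residual replica is the periodic artifact a spectral detector keys on.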

Cited by 371 publications (273 citation statements). References 20 publications.
“…They described a new method based on monitoring neuron behaviors of a dedicated CNN to detect faces generated by Deepfake technologies. The comparison with Zhang et al [17] demonstrated an average detection accuracy of more than 90%…”
Section: B. Deepfake Detection Methods (mentioning)
confidence: 79%
“…This preliminary insight was detected by Guarnera et al [4] in which the authors tried to roughly detect Deepfakes by means of well-known forgery detection tools ( [2], [15], [16]) with only few insights for future works as results. The analysis in the Fourier domain was employed by Zhang et al [17] in a rather naive strategy which delivered in any case good performances. Later, an interesting work known as FakeSpotter was proposed by Wang et al [18].…”
Section: B. Deepfake Detection Methods (mentioning)
confidence: 99%
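The Fourier-domain strategy mentioned in the statement above is typically operationalised by feeding a compact spectral signature to an ordinary classifier. As a hedged sketch (the radial binning and bin count are illustrative choices, not taken from any cited paper), a radially averaged log-magnitude spectrum yields a fixed-length feature vector per image:

```python
import numpy as np

def radial_spectrum(img, nbins=20):
    """1-D radial profile of the log-magnitude spectrum: a compact
    frequency-domain feature vector for a fake-image classifier."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    h, w = spec.shape
    y, x = np.indices(spec.shape)
    r = np.hypot(y - h / 2, x - w / 2)           # distance from spectrum centre
    bins = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    profile = np.bincount(idx, weights=spec.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return profile / np.maximum(counts, 1)       # mean log-magnitude per ring

feat = radial_spectrum(np.random.default_rng(1).standard_normal((64, 64)))
print(feat.shape)  # prints: (20,)
```

A detector would compute such vectors for real and synthesized images and fit any standard classifier on them; GAN up-sampling artifacts show up as anomalous energy in the outer (high-frequency) rings.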
“…Although easily fooling the human, the state-of-the-art synthesized images can still be detected in many cases by current fake detection methods. The state-of-the-art synthesized methods often introduce artifact patterns into the image during generation, opening a chance for fake detectors [14,59]. Due to the current technical limitation, even worse, the image manipulation footprint will be inevitably left in a synthesized image, either by partial image manipulation [11,18,34] or full image synthesis [29][30][31].…”
Section: Introduction (mentioning)
confidence: 99%