2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00872
CNN-Generated Images Are Surprisingly Easy to Spot… for Now

Cited by 629 publications (574 citation statements); references 30 publications.
“…Wang et al. [57] ask whether it is possible to create a "universal" detector to distinguish real images from synthetic ones, using a dataset of synthetic images generated by 11 CNN-based generative models. See Fig.…”
Section: A Universal Fake vs. Real Detector (mentioning)
confidence: 99%
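As a toy illustration of the "universal detector" idea quoted above, the sketch below separates smooth "real" images from "fake" ones carrying extra high-frequency noise, using a single hand-crafted Laplacian-energy feature and a threshold. Everything here (the feature, the synthetic toy data, the threshold rule) is an assumption made for illustration only; it is not the method of Wang et al. [57], who train a CNN classifier on real versus CNN-generated images.

```python
import numpy as np

rng = np.random.default_rng(0)

def highfreq_energy(img):
    # Mean absolute Laplacian response: a crude proxy for the
    # high-frequency artifacts that image generators can leave behind.
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap).mean()

def make_real(n=32):
    # Toy "real" images: a smooth gradient plus mild sensor-like noise.
    x = np.linspace(0.0, 1.0, n)
    return np.outer(x, x) + 0.01 * rng.standard_normal((n, n))

def make_fake(n=32):
    # Toy "fake" images: same content plus extra high-frequency noise,
    # loosely mimicking generator upsampling artifacts.
    return make_real(n) + 0.05 * rng.standard_normal((n, n))

reals = [highfreq_energy(make_real()) for _ in range(50)]
fakes = [highfreq_energy(make_fake()) for _ in range(50)]

# Classify with a threshold halfway between the two class means.
thr = (np.mean(reals) + np.mean(fakes)) / 2
acc = (np.mean([e < thr for e in reals])
       + np.mean([e >= thr for e in fakes])) / 2
print(f"accuracy: {acc:.2f}")
```

On this deliberately easy toy data the two classes separate cleanly; the paper's point is that a learned detector generalizes across many generators, which a single fixed feature like this would not.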
“…Fortunately, or unfortunately, it is still possible to build deep networks that can detect the subtle artifacts in the doctored images (e.g. using the universal detectors mentioned above [57,58]). Moving forward, it is important to study whether and how GAN evaluation measures can help us mitigate the threat from deepfakes.…”
Section: Connection to Deepfakes (mentioning)
confidence: 99%
“…GANs have shown the capability of generating realistic images [34, 41–47], and have advanced in many respects, such as latent space [12–16], network architecture [23–26], and objective function [17–22]; they have driven applications such as style transfer [27, 29, 30], text-to-image translation [31, 32, 48–51], face aging [52, 53], image super-resolution [28, 54, 55], and video generation [56]. A taxonomy of the above-mentioned methods and variations is illustrated in Fig. 3.…”
Section: Variations of GANs (mentioning)
confidence: 99%
“…In response to this growing threat, (Agarwal et al., 2019) propose a forensic approach to identify fake videos by modeling people's facial expressions and speaking movements. In a similar vein to (Tay et al., 2020), (Matern et al., 2019; Yang et al., 2019a; Wang et al., 2020) seek to exploit visual artifacts to detect face manipulations and deepfakes. Encouragingly, neural networks have been shown to easily learn to detect generated images even without exposure to training samples from those generators.…”
Section: Image and Video Generation and Defense (mentioning)
confidence: 99%