2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00337
Evading Deepfake-Image Detectors with White- and Black-Box Attacks

Cited by 132 publications (97 citation statements) | References 14 publications
“…For example, popular attack methods such as FGSM [12] and PGD [13] use the model's gradient with respect to the input image to generate perturbations that cause detectors to misclassify. Farid et al. [14] likewise showed that forensic models can be defeated by adversarial examples, confirming the unreliability of such detectors. But because the structure of the forensic model is unknown, these methods attack a substitute model instead.…”
Section: Introduction
confidence: 94%
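The gradient-sign attacks named in this excerpt (FGSM and its iterated variant PGD) fit in a few lines. Below is a minimal PyTorch sketch, not the paper's own attack: model stands in for any differentiable real-vs-fake detector, and the budget eps and step size alpha are illustrative values.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=8 / 255):
    # One-step FGSM: move each pixel by eps along the sign of the
    # loss gradient so the detector's loss (and error) increases.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

def pgd_attack(model, image, label, eps=8 / 255, alpha=2 / 255, steps=10):
    # PGD: repeat small FGSM steps, projecting back into the
    # eps-ball around the original image after every step.
    orig = image.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            adv = orig + (adv - orig).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv

In the black-box setting the excerpt describes, the same procedure would be run against a substitute model and the perturbed images transferred to the unknown detector.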
“…In the field of deepfake detection, neural networks are widely used to distinguish forged videos. However, owing to inherent weaknesses, neural networks cannot resist adversarial-example attacks [86][87][88]. To this end, researchers need to design more robust algorithms that can withstand such attacks. In recent years, deepfake technologies, which rely on deep learning, have been developing at an unprecedented rate.…”
Section: Antiforensics
confidence: 99%
“…The majority of GAN-synthesized-image detection methods extract signal-level cues and then train classifiers, such as SVMs or deep neural networks, to distinguish synthesized images from real ones. Although high performance has been reported for these methods, they share some common drawbacks, including a lack of interpretability of the detection results, low robustness to laundering operations and adversarial attacks [11], and poor generalization across different synthesis methods. A different type of detection method takes advantage of…”
Section: Introduction
confidence: 99%
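The "extract signal-level cues, then train a classifier" pipeline summarized above can be sketched concretely. A minimal, illustrative example, assuming the cue is an azimuthally averaged Fourier magnitude spectrum (where GAN upsampling often leaves periodic artifacts) and the classifier is an SVM; the feature choice and names are assumptions, not the exact features of the cited methods.

import numpy as np
from sklearn.svm import SVC

def spectral_features(image_gray):
    # Signal-level cue: log-magnitude spectrum, averaged over rings
    # of equal frequency radius (a 1-D radial profile).
    f = np.fft.fftshift(np.fft.fft2(image_gray))
    mag = np.log1p(np.abs(f))
    h, w = mag.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    counts = np.bincount(r.ravel())
    totals = np.bincount(r.ravel(), weights=mag.ravel())
    profile = totals / np.maximum(counts, 1)
    return profile[: min(h, w) // 2]

# Hypothetical usage: reals and fakes are lists of grayscale arrays.
# X = np.stack([spectral_features(im) for im in reals + fakes])
# y = np.array([0] * len(reals) + [1] * len(fakes))
# clf = SVC(kernel="rbf").fit(X, y)

The drawbacks the excerpt lists follow naturally from such a pipeline: the decision rests on a fixed signal cue that laundering operations or adversarial perturbations can erase.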