2021
DOI: 10.3390/fi13110288

Deepfake-Image Anti-Forensics with Adversarial Examples Attacks

Abstract: Many deepfake-image forensic detectors have been proposed and improved due to the development of synthetic techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the impact of adversarial examples on their performance is an important step towards improving deepfake-image detectors. This study developed an anti-forensics case study of two popular general deepfake detectors based on their accuracy and generalization. …

Cited by 7 publications (6 citation statements)
References 29 publications

“…A successful adversarial attack requires embedding imperceptible noise perturbations into the fake sample, which deceives the detector into classifying the image as "real". Several classic adversarial attack methods, including the Fast Gradient Sign Method (FGSM) [45], iterative FGSM [46], the Carlini and Wagner l2-norm attack [47], DeepFool [48], and Projected Gradient Descent (PGD) [49], have been explored to expose the vulnerability of DeepFake detectors in both white- and black-box scenarios [3]-[7], [14], [15]. Liao et al. [8] improved the efficiency of these attacks by adding perturbations to key regions of the DeepFakes instead of across the entire image.…”
Section: Anti-Forensics for DeepFakes
confidence: 99%
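As a rough illustration of the one-step FGSM idea in this anti-forensic setting, here is a minimal PyTorch sketch. The `detector` model, the epsilon value, and the assumption that the detector outputs a single "fake" logit per image are all hypothetical, not the cited papers' exact setups.

```python
import torch

def fgsm_to_real(detector, image, eps=0.03):
    """One-step FGSM sketch: nudge a fake image so a binary
    deepfake detector scores it as "real" (illustrative only)."""
    # image: (N, C, H, W) tensor in [0, 1]; detector returns one "fake" logit per image.
    image = image.clone().detach().requires_grad_(True)
    fake_score = detector(image).sum()
    fake_score.backward()
    # Step *against* the gradient to lower the "fake" score, then keep pixels valid.
    adv = image - eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Iterative FGSM and PGD repeat this step with a smaller step size and project back into an epsilon-ball around the original image after each step.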
“…• Blurring: images are blurred with a Gaussian filter whose kernel size is randomly sampled from {3, 5, 7, 9}.…”
Section: Victim Detector's Capability
confidence: 99%
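A minimal sketch of that augmentation, assuming torchvision's GaussianBlur transform; the excerpt only specifies the kernel sizes, so the sigma range used here is an assumption:

```python
import random
from torchvision import transforms

def random_gaussian_blur(img):
    # Kernel size drawn uniformly from {3, 5, 7, 9}, as in the cited statement.
    k = random.choice([3, 5, 7, 9])
    # The sigma range is an assumed default; the citing paper may sample it differently.
    return transforms.GaussianBlur(kernel_size=k, sigma=(0.1, 2.0))(img)
```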
“…Ding et al. [47-51,53] propose anti-forensic tools and methods by which to bypass "DeepFake" detection on videos. On the other hand, Zhao, X. et al. [52,54] apply anti-forensics to "DeepFake" detection in GAN-generated images.…”
confidence: 99%
“…Studies [47-54] propose different strategies and models for generating DeepFakes that can evade forensic detection. Paper [47] proposes GAN models with additional features and loss functions designed to improve visual quality and model efficiency.…”
confidence: 99%
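The excerpt does not say which loss terms [47] adds, but the general pattern it describes, augmenting a generator's adversarial objective with an image-quality term, can be sketched as follows. The L1 reconstruction term and its weight are illustrative stand-ins, not the actual losses from [47].

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_img, ref_img, lam=10.0):
    """Composite generator loss sketch: adversarial term plus an
    assumed visual-quality term (illustrative, not from [47])."""
    # Standard non-saturating adversarial term: make the discriminator call fakes "real".
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Stand-in visual-quality term: L1 distance to a reference image.
    quality = F.l1_loss(fake_img, ref_img)
    return adv + lam * quality
```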