2020
DOI: 10.48550/arxiv.2002.12749
Preprint

Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples

Cited by 2 publications (3 citation statements)
References 0 publications
“…This shows one of the vulnerabilities of deepfake detectors. In addition, the work by Hussain et al. showed that DNN-based deepfake detectors are not robust to adversarially modified fake videos [7]. These adversarial perturbations pose serious risks, as they can remain effective even after image and video compression [7].…”
Section: Related Work
confidence: 99%
“…The work by Hussain et al. showed that DNN-based deepfake detectors are not robust to adversarially modified fake videos [7]. These adversarial perturbations pose serious risks, as they can remain effective even after image and video compression [7]. In addition, adversarial perturbations have been shown to transfer between different models [14].…”
Section: Related Work
confidence: 99%
“…Unfortunately, having these detectors is still not sufficient to prevent the aforementioned misuse of fake images, owing to the rise of adversarial machine learning attacks. It has been reported that fake-image detectors are vulnerable to adversarial input attacks [48], whereby attackers inject noise into fake images to mislead the detector into labeling the perturbed fake image as real. In this paper, we propose a new defensive mechanism targeting the above data poisoning attacks on deepfake detectors.…”
Section: DNN-Based Deepfake Detection Model
confidence: 99%
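
The adversarial input attack these citing statements describe is typically instantiated with a gradient-based perturbation such as FGSM. Below is a minimal, hypothetical PyTorch sketch of that idea; the `detector` model, the (real = 0, fake = 1) label convention, and the epsilon budget are illustrative assumptions, not the actual attack implementation from [7] or [48].

```python
import torch

def fgsm_perturb(detector, frame, epsilon=8 / 255):
    """One-step targeted FGSM: nudge a fake frame so the detector calls it real.

    detector: assumed model mapping a float image batch in [0, 1] to logits
              over two classes (index 0 = real, index 1 = fake).
    frame:    tensor of shape (1, 3, H, W), a frame from a fake video.
    epsilon:  L-infinity budget, kept small so the change is visually negligible.
    """
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame)
    # Loss toward the target label "real" (class 0).
    loss = torch.nn.functional.cross_entropy(
        logits, torch.tensor([0], device=frame.device)
    )
    loss.backward()
    # Step against the gradient to decrease the loss for "real",
    # then clamp back to a valid image range.
    adversarial = frame - epsilon * frame.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Note that [7] additionally reports perturbations that remain effective under compression and that transfer across detectors; a single-step sketch like this does not by itself demonstrate those stronger properties.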