2020
DOI: 10.1007/978-3-030-66823-5_14
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems

Cited by 66 publications (75 citation statements)
References 19 publications
“…This can be applied only in some contexts. Ruiz et al. [72] proposed a method based on adversarial attacks as a defense. By adding specific noise to the image, they are able to make that image unusable by a deepfake generator.…”
Section: Controlled Acquisition Device (mentioning)
confidence: 99%
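The excerpt above summarizes the core idea of the cited defense: add an imperceptible adversarial perturbation to a face image so that a conditional image translation network can no longer manipulate it cleanly. Below is a minimal sketch of that idea, assuming a PyTorch image-to-image model passed in as `generator` (a stand-in surrogate, not the authors' code); the function name `disrupt`, the budget values, and the MSE objective that pushes the perturbed output away from the clean output are illustrative choices based on the general description in the citing papers, not the exact formulation of Ruiz et al.

```python
import torch

def disrupt(generator, x, epsilon=0.05, step_size=0.01, iters=10):
    """Craft an imperceptible perturbation so that the generator's output on
    the perturbed image is pushed far away from its output on the clean image.
    (Sketch only: `generator` is an assumed surrogate image-to-image model.)"""
    generator.eval()
    x = x.detach()
    with torch.no_grad():
        clean_out = generator(x)                      # manipulation of the clean image
    delta = torch.zeros_like(x, requires_grad=True)   # the adversarial noise
    for _ in range(iters):
        out = generator(x + delta)
        # Gradient ascent on the distortion of the manipulated output.
        loss = torch.nn.functional.mse_loss(out, clean_out)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()    # I-FGSM / PGD-style step
            delta.clamp_(-epsilon, epsilon)           # keep the noise in an L-infinity ball
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```

In use, one would pass a pretrained face manipulation model (for example a StarGAN-style attribute editor) as `generator` together with a normalized face tensor of shape (1, 3, H, W); the returned image looks essentially unchanged to a viewer, and, to the extent the perturbation transfers to the real deepfake model, the manipulation it produces is visibly corrupted.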
“…Considering that forgery detection algorithms [72,87] identify forged images mainly by detecting fingerprint features specific to faces generated by GAN models, Neves et al. [111] proposed removing these fingerprints to fool existing face forgery detectors [79]. Ruiz et al. [112] of Boston University drew on classical adversarial example attacks to interfere with the generation process of forged face images, injecting adversarial perturbations into the generated results so as to attack deepfake algorithms. Huang et al. [113] proposed removing the forgery cues in a generated image by reconstructing its local details, thereby fooling existing forgery detection methods.…”
Section: Research on Trustworthy Detection Against Adversarial Example Attacks (unclassified)
“…Specifically, once a face image is uploaded to the Internet, it is exposed to malicious manipulation such as deepfakes. The method of [8] can effectively prevent malicious face swapping by using adversarial examples. However, attack methods based on gradient iteration require a large amount of storage to load the substitution model.…”
Section: Introduction (mentioning)
confidence: 99%