2019 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2019.8803269

Generating Adversarial Examples By Makeup Attacks on Face Recognition

Cited by 57 publications (37 citation statements)
References 8 publications
“…As CNNs are increasingly applied in practice, many researchers have begun to focus on the security of CNNs and have studied AEs in areas such as autonomous driving [36] and face recognition [37], [38], finding that AEs reduce the robustness of CNNs. AEs exist not only in computer vision fields, such as image classification [21]-[23], [39]-[41], object detection [42], [43], and semantic segmentation [44], [45], but also in natural language processing [46], [47] and speech recognition [48], [49].…”
Section: Related Work
confidence: 99%
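For readers unfamiliar with the term, an adversarial example (AE) is an input carrying a small, often imperceptible perturbation that flips a CNN's prediction. Below is a minimal untargeted FGSM sketch in PyTorch; the classifier `model`, batch `x`, and labels `y` are illustrative placeholders, not objects from the cited works:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step untargeted FGSM: perturb x slightly to increase the loss on y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # step in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach() # keep pixels in the valid range
```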
“…Zhu, Lu and Chiang (2019) aim to generate adversarial examples with a GAN that attacks face recognition models by applying makeup effects to facial images. Their experimental results show that, compared with previous studies, the method generates high-quality facial makeup images [59]. Chen, Zhu and Wu (2019) used a GAN to augment small infrared datasets in their study.…”
Section: Work With Generative Adversarial Network
confidence: 96%
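The attack summarized above couples a GAN's realism objective with an adversarial term against the face recognition model. The sketch below is one plausible form of such a combined generator loss, not the authors' exact formulation; the generator `G`, discriminator `D`, embedding network `embed`, and loss weights are all assumed placeholders:

```python
import torch
import torch.nn.functional as F

def makeup_generator_loss(G, D, embed, x, enrolled_emb,
                          lambda_gan=1.0, lambda_adv=1.0):
    """Realism (GAN) term plus an adversarial term that pushes the
    made-up face's embedding away from the enrolled identity."""
    x_makeup = G(x)                      # makeup-transferred image
    logits = D(x_makeup)
    # GAN term: the generator tries to make D label x_makeup as real.
    gan_term = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    # Adversarial term: lower cosine similarity to the enrolled embedding.
    emb = F.normalize(embed(x_makeup), dim=1)
    adv_term = F.cosine_similarity(emb, F.normalize(enrolled_emb, dim=1)).mean()
    return lambda_gan * gan_term + lambda_adv * adv_term
```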
“…(ii) Untargeted attacks: The goal of an untargeted attack is to lead the neural network to misclassify its inputs. An attacker can simply employ similar approaches, such as wearing a mask, glasses [32], or makeup [42], or adopting certain expressions [22], to impersonate another subject, typically an enrollee within the enrolment dataset.…”
Section: Threat Model
confidence: 99%
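To make the cited threat model concrete: an untargeted attack on a face matcher only needs to drive the probe's match score with its own enrolled template below the acceptance threshold. A hedged PGD-style sketch, assuming a differentiable embedding network `embed` and an enrolled `template`; every name and hyperparameter here is illustrative:

```python
import torch
import torch.nn.functional as F

def untargeted_dodge(embed, x, template, eps=0.03, alpha=0.005, steps=10):
    """PGD-style untargeted attack: minimize the cosine match score
    between the perturbed probe and the subject's enrolled template."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        sim = F.cosine_similarity(F.normalize(embed(x_adv), dim=1),
                                  F.normalize(template, dim=1)).mean()
        grad = torch.autograd.grad(sim, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # push the score down
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)             # valid pixel range
    return x_adv.detach()
```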