2020
DOI: 10.48550/arxiv.2006.11623
Preprint

FaceHack: Triggering backdoored facial recognition systems using facial characteristics

Abstract: Recent advances in Machine Learning (ML) have opened up new avenues for its extensive use in real-world applications. Facial recognition, specifically, is used in applications ranging from simple friend suggestions on social media platforms to critical security applications such as biometric validation in automated immigration control at airports. In such scenarios, security vulnerabilities in these ML algorithms pose serious threats with severe consequences. Recent work demonstrated that Deep Neural Networks (DNNs), typically used in facial… Show more

Cited by 9 publications (17 citation statements)
References 25 publications (64 reference statements)
“…The evaluations on the MNIST and CIFAR10 datasets demonstrate [136] that around 90% and 50% of trigger images, respectively, are reverted to their true predictions, which is less effective than other run-time inspection methods, e.g., [82]. On the other hand, it is acknowledged that this method degrades the clean data accuracy (CDA) on clean inputs to a relatively unacceptable degree [41]. Thus, it is less suitable as a wrapper around the trained model.…”
Section: A. Blind Backdoor Removal
confidence: 99%
“…Though both adversarial examples and backdoor triggers can hijack the model into misclassification, backdoor triggers offer the attacker maximal flexibility to hijack the model using the most convenient secret. Consequently, an attacker has full control over converting the physical scene into a working adversarial input, and backdoor attacks are more robust to physical influences such as viewpoint and lighting [38]-[41]. The other main difference between adversarial examples and backdoor attacks is the affected stage of the ML pipeline, as compared in Fig.…”
Section: A. Adversarial Example Attack
confidence: 99%
“…A manipulated reflection can hide backdoor rules. The reflection images are just normal images from a dataset disjoint from the training set, and they are blended into a portion of the training set following the law of reflection and the camera imaging principle [35].…”
Section: Static Attack
confidence: 99%
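
To make the blending step described above concrete, here is a minimal Python sketch (not taken from [35] or from the FaceHack paper) of how a reflection image could be composited into a clean training image to produce a poisoned sample. The file names, the blending weight alpha, and the Gaussian blur used to imitate an out-of-focus reflection are illustrative assumptions, not the actual procedure of the cited work.

# Illustrative sketch only: blends an unrelated "reflection" image into a
# clean training image, loosely mimicking the reflection-based poisoning
# idea quoted above. Parameters here are hypothetical, not from [35].
import numpy as np
from PIL import Image, ImageFilter

def poison_with_reflection(clean_path, reflection_path, alpha=0.35, blur_radius=2.0):
    # Load the clean training image.
    clean = Image.open(clean_path).convert("RGB")
    # Match sizes and blur the reflection: real reflections are usually
    # dimmer and out of focus relative to the main scene.
    reflection = (Image.open(reflection_path).convert("RGB")
                  .resize(clean.size)
                  .filter(ImageFilter.GaussianBlur(blur_radius)))
    clean_arr = np.asarray(clean, dtype=np.float32)
    refl_arr = np.asarray(reflection, dtype=np.float32)
    # Additive blend: the poisoned image keeps the clean content but also
    # carries a faint reflection layer that serves as the backdoor trigger.
    poisoned = np.clip(clean_arr + alpha * refl_arr, 0, 255).astype(np.uint8)
    return Image.fromarray(poisoned)

# Hypothetical usage: samples produced this way, relabeled with the attacker's
# target class, would be mixed into a small fraction of the training set.
# poison_with_reflection("face_0001.jpg", "window_glare.jpg").save("poisoned_0001.jpg")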