2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS)
DOI: 10.1109/btas.2018.8698550
Spoofing Deep Face Recognition with Custom Silicone Masks

Abstract: We investigate the vulnerability of convolutional neural network (CNN) based face-recognition (FR) systems to presentation attacks (PA) performed using custom-made silicone masks. Previous works have studied the vulnerability of CNN-FR systems to 2D PAs, such as print attacks or digital-video replay attacks, and to rigid 3D masks. This is the first study to consider PAs performed using custom-made flexible silicone masks. Before embarking on research on detecting a new variety of PA, it is important to estimate…

Cited by 89 publications (58 citation statements) · References 16 publications
“…In [42] Bhattacharjee et al showed that it is possible to spoof commercial face recognition systems with custom silicone masks. They also proposed to use the mean temperature of the face region for PAD.…”
Section: Multi-channel Based Approaches and Datasets for Face PAD
confidence: 99%
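The idea cited above — using the mean temperature of the face region for PAD — can be sketched as a simple threshold rule on a thermal-camera face crop. The function name, the 30 °C threshold, and the toy arrays below are illustrative assumptions, not values from the cited work; they rest only on the general fact that live facial skin is warmer (roughly 34–36 °C) than a silicone mask near room temperature.

```python
import numpy as np

# Hypothetical threshold (degrees C): live facial skin is typically ~34-36 C,
# while a silicone mask surface near room temperature stays well below that.
SKIN_TEMP_THRESHOLD_C = 30.0

def is_bona_fide(thermal_face_roi: np.ndarray,
                 threshold: float = SKIN_TEMP_THRESHOLD_C) -> bool:
    """Classify a thermal face crop as bona fide if its mean temperature
    reaches the skin-temperature threshold (illustrative rule only)."""
    return float(np.mean(thermal_face_roi)) >= threshold

# Toy example: a warm "live" face region vs. a cool mask surface.
live_face = np.full((64, 64), 35.0)  # degrees Celsius
mask_face = np.full((64, 64), 24.0)
print(is_bona_fide(live_face))  # True
print(is_bona_fide(mask_face))  # False
```

In practice the face region would come from a detector run on the thermal channel, and the threshold would be calibrated on development data rather than fixed a priori.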
“…As attack techniques are constantly upgraded, some new types of presentation attacks have emerged, e.g., 3D [10] and silicone masks [1]. These attacks are more realistic than traditional 2D attacks.…”
Section: A Dataset
confidence: 99%
“…Face anti-spoofing aims to determine whether the captured face from a face recognition system is real or fake. With the development of deep Convolutional Neural Networks (CNNs), face recognition [1]- [5] has achieved near-perfect recognition performance and has already been applied in our daily life, such as phone unlock, access control and face payment. However, these face recognition systems are prone to be attacked in various ways including print attack, video replay attack and 2D/3D mask attack, causing the recognition result to become unreliable.…”
Section: Introduction
confidence: 99%
“…Current face-PAD methods can be classified regarding the following standpoints: i) from the hardware used for data acquisition as rgb-only [11,21,26] or additional sensors [2,23] approaches; ii) from the required user interaction as active [14] or passive [16,26] methods; iii) from the input data type as single-frame [26] or video-based [1,22] approaches; iv) and, finally, depending on the feature extraction and classification strategy as hand-crafted [4,26] or deep learning [13,16]. Based on these classifications, we can depict that the most challenging scenario occurs when data is captured using rgb-only sensors using passive approaches that avoid any challenge-response interaction with the user (e.g.…”
Section: Related Work
confidence: 99%
“…The GRAD-GPAD framework simplifies such a dynamic structure thanks to its scalable nature and only requires the data to be split into three subsets in order to evaluate face-PAD [22], we have split CASIA-FASD [31], Rose-Youtu [17] and SiW [13], keeping the test subset unmodified and splitting the original training set in a training subset comprising 80% of the users and a development subset comprising the remaining 20%. Furthermore, CS-MAD [2] does not contain explicit subsets, so we randomly partitioned the data into the mentioned subsets (40% in Train, 30% in Dev and 30% in Test) from the users' identities. Finally MSU-MFSD [26] is originally divided in two folds, nevertheless we re-divided it based on this Python package † .…”
Section: The Aggregated Dataset
confidence: 99%
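The 40/30/30 partitioning by user identity described in the quoted passage can be sketched as follows. This is a minimal illustration under stated assumptions, not the GRAD-GPAD implementation: the function name and the list-of-pairs input format are invented for the example. The key point it demonstrates is that the proportions are applied to the users, so no identity appears in more than one subset.

```python
import random

def split_by_identity(samples, train_frac=0.4, dev_frac=0.3, seed=0):
    """Partition (user_id, sample) pairs into identity-disjoint
    train/dev/test subsets; fractions apply to users, not samples."""
    users = sorted({uid for uid, _ in samples})
    rng = random.Random(seed)        # fixed seed for a reproducible split
    rng.shuffle(users)
    n = len(users)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    groups = {
        "train": set(users[:n_train]),
        "dev": set(users[n_train:n_train + n_dev]),
        "test": set(users[n_train + n_dev:]),
    }
    return {name: [s for uid, s in samples if uid in uids]
            for name, uids in groups.items()}

# Toy example: 10 users with 2 samples each -> 4/3/3 users per subset.
data = [(u, f"user{u}_sample{i}") for u in range(10) for i in range(2)]
splits = split_by_identity(data)
print({name: len(subset) for name, subset in splits.items()})
# -> {'train': 8, 'dev': 6, 'test': 6}
```

Splitting by identity rather than by raw sample is what keeps the evaluation honest for face PAD: otherwise images of the same person could leak between training and test subsets.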