An exploratory study of interpretability for face presentation attack detection (2021)
DOI: 10.1049/bme2.12045

Abstract: Biometric recognition and presentation attack detection (PAD) methods strongly rely on deep learning algorithms. Though often more accurate, these models operate as complex black boxes. Interpretability tools are now being used to delve deeper into the operation of these methods, which is why this work advocates their integration in the PAD scenario. Building upon previous work, a face PAD model based on convolutional neural networks was implemented and evaluated both through traditional PAD metrics and with i…
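The abstract describes a CNN-based face PAD classifier evaluated with standard PAD metrics and interpretability tools, but the paper's actual architecture is not given in this excerpt. The following is only a minimal sketch, assuming PyTorch, 224×224 RGB face crops, and a two-class bona-fide/attack output; every layer size is hypothetical.

```python
import torch
import torch.nn as nn

class FacePADNet(nn.Module):
    """Minimal CNN for face presentation attack detection.

    Hypothetical architecture for illustration only; the paper's
    actual network is not specified in the excerpt above.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 112 -> 56
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                # global average pooling
        )
        self.classifier = nn.Linear(128, 2)         # bona fide vs. attack

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f.flatten(1))

model = FacePADNet()
scores = model(torch.randn(4, 3, 224, 224))         # 4 face crops
print(scores.shape)                                 # torch.Size([4, 2])
```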

Cited by 6 publications (4 citation statements). References 56 publications (79 reference statements).
“…Additionally, the saliency maps are used to further train the classifier of the PAD system, thus enhancing its performance, as shown in Table 3. Compared to the recently published papers [25,26], the work presented here has the additional advantage of producing human-readable explanations.…”
Section: Results
Confidence: 99%
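The quoted passage says saliency maps were fed back to further train the PAD classifier, but the excerpt does not spell out the recipe. One plausible reading, sketched below under stated assumptions (PyTorch; input-gradient saliency; the saliency map applied as a soft attention mask during a training step; all function names hypothetical):

```python
import torch
import torch.nn.functional as F

def saliency_map(model, x, y):
    """Input-gradient saliency: |d(true-class score)/d(pixel)|,
    reduced over colour channels and scaled to [0, 1] per image."""
    x = x.clone().requires_grad_(True)
    scores = model(x)
    scores.gather(1, y.view(-1, 1)).sum().backward()
    sal = x.grad.abs().amax(dim=1, keepdim=True)       # (N, 1, H, W)
    lo = sal.amin(dim=(2, 3), keepdim=True)
    hi = sal.amax(dim=(2, 3), keepdim=True)
    return ((sal - lo) / (hi - lo + 1e-8)).detach()

def saliency_guided_step(model, optimiser, x, y):
    """One training step in which the saliency map re-weights the
    input, emphasising regions the current model already uses."""
    sal = saliency_map(model, x, y)
    x_weighted = x * (0.5 + 0.5 * sal)                 # soft mask in [0.5, 1]
    optimiser.zero_grad()                              # clear grads left by the saliency pass
    loss = F.cross_entropy(model(x_weighted), y)
    loss.backward()
    optimiser.step()
    return loss.item()
```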
“…Much of the current literature in this area pays particular attention to defining "what is the explanation". Visualization of the filters in a CNN, also referred to as perceptive interpretability methods [14,25,26], is one of the direct ways to explore patterns hidden within the neural units. The Up-convolutional network [27] was developed to reverse the feature map back to an image.…”
Section: Related Work
Confidence: 99%
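As a concrete instance of the filter-visualization ("perceptive interpretability") methods the quote refers to, here is a short sketch that renders the first-layer convolution kernels of a pretrained network. torchvision's ResNet-18 is used purely as a stand-in; the models studied in [14,25,26] differ.

```python
import torchvision.models as models
import matplotlib.pyplot as plt

# Any conv net works; ResNet-18 is used here only as a stand-in.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
kernels = net.conv1.weight.detach()           # (64, 3, 7, 7)

# Scale each kernel to [0, 1] so it can be shown as an RGB patch.
k = kernels - kernels.amin(dim=(1, 2, 3), keepdim=True)
k = k / k.amax(dim=(1, 2, 3), keepdim=True)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, patch in zip(axes.flat, k):
    ax.imshow(patch.permute(1, 2, 0))         # CHW -> HWC
    ax.axis("off")
plt.suptitle("First-layer filters (ResNet-18)")
plt.show()
```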
“…Recently, great efforts have been achieved on interpretable FAS [183]. Some methods try to localize the spoof regions according to the feature activation using visual interpretability tools (e.g., Grad-CAM [184]) or soft-gating strategy [131].…”
Section: Architecture Supervision and Interpretability
Confidence: 99%
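Grad-CAM [184], mentioned in the quote, weights a convolutional layer's activation maps by the channel-averaged gradients of a class score and keeps only the positive evidence; upsampled to image size, the resulting heat map can localise candidate spoof regions. A minimal sketch, assuming PyTorch and a model and layer chosen by the caller (the cited methods' backbones are not reproduced here):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, x, target_class):
    """Grad-CAM heat map for one class: ReLU of the activation maps
    weighted by channel-averaged gradients (Selvaraju et al.)."""
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        model.zero_grad()
        score = model(x)[:, target_class].sum()
        score.backward()
    finally:
        h1.remove()
        h2.remove()
    A, dA = acts[0], grads[0]                       # (N, C, h, w)
    weights = dA.mean(dim=(2, 3), keepdim=True)     # global-avg-pooled grads
    cam = F.relu((weights * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    # Normalise to [0, 1] for overlaying on the face image.
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return cam.detach()                             # (N, 1, H, W)
```

A typical call would be cam = grad_cam(pad_net, last_conv, faces, target_class=attack_idx), where pad_net, last_conv, and attack_idx are placeholders for the caller's own PAD backbone, its final convolutional layer, and the attack-class index.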