2020
DOI: 10.1186/s13640-020-0490-z

Adversarial attacks on fingerprint liveness detection

Abstract: Deep neural networks are vulnerable to adversarial samples, posing potential threats to the applications deployed with deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by the great progress of deep learning, deep networks-based fingerprint liveness detection algorithms spring up and dominate the field. Thus, we investigate the feasibility of deceiving state-of-the-art deep networks-based fingerprint liveness…

Cited by 42 publications (9 citation statements)
References 29 publications (32 reference statements)
“…We have tested existing adversarial methods in this study, including FGSM, MI-FGSM, DeepFool, L-BFGS, C&W, BIM, Foolbox, PGD, and JSMA [16]. Our goal is to compromise existing DL algorithms so that each recognition system misclassifies data with the smallest possible perturbation.…”
Section: Test Results
confidence: 99%
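
As a concrete illustration of the gradient-based attacks listed above, here is a minimal FGSM sketch, assuming a PyTorch classifier `model` that maps a batch of fingerprint images `x` in [0, 1] to class logits; the function name and the `eps` budget are illustrative assumptions, not details taken from the cited study.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Single-step FGSM: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step that increases the loss, then clip back to the
    # valid pixel range so the result is still a plausible image.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Iterative variants such as BIM, MI-FGSM, and PGD repeat essentially this step with a smaller step size, which typically finds a misclassifying perturbation within a smaller budget.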
“…This research area may be investigated further to promote research on the diversity of human personalities. Although DL models are robust and produce state-of-the-art performance, they are still susceptible to security challenges/attacks such as adversarial attacks, spoofing attacks, and presentation attacks [6,109,114,123,124]. For security reasons, fingerprint biometric systems do not store fingerprint features as they are but save them as a template, which is considered relatively safe.…”
Section: Open Challenges and Prospect For Future Research
confidence: 99%
“…This problem is particularly critical in security-related domains that leverage CNNs in one of their stages. Focusing on the case of fingerprints, the vulnerability of FPAD systems to adversarial perturbations has already been demonstrated [5], [6], through methods designed to generate perturbed fingerprint images that mislead a target FPAD in the digital domain. In both cases, the attacks assume that the attacker can feed the perturbed digital image directly into the CNN.…”
Section: CNN-based FPAD and Adversarial Perturbations
confidence: 99%
“…Adversarial perturbations are thus potentially very dangerous, since an attacker could intentionally add small perturbations to an image, keeping it visually unchanged, to force the classifier's decision. Along this line, some recent works have already demonstrated the effectiveness of a digital attack on a CNN-based FPAD [5], [6]. This type of attack assumes that the attacker is able to access the communication channel between the sensor and the neural network, and can thus submit the adversarial images directly as input to the FPAD.…”
Section: Introduction
confidence: 99%
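
To make this digital threat model concrete: once the attacker can inject images between the sensor and the network, an iterative attack such as PGD can be run directly against the FPAD classifier. The sketch below uses the same assumptions as the FGSM example above (a PyTorch `model` over images in [0, 1]); the budget `eps`, step size `alpha`, and iteration count are illustrative, not values from the cited works.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative gradient-sign attack projected onto an L-infinity ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around the
        # original image x and the valid pixel range, keeping the
        # perturbation visually imperceptible.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

The projection step is what distinguishes this from simply repeating FGSM: it bounds the total change per pixel by `eps`, so a spoof image classified as fake can, if the attack succeeds, be nudged toward a live prediction without visible artifacts.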