2021
DOI: 10.14569/ijacsa.2021.0120436

PlexNet: An Ensemble of Deep Neural Networks for Biometric Template Protection

Abstract: The security of biometric systems, especially protecting the templates stored in the gallery database, is a primary concern for researchers. This paper presents a novel framework that uses an ensemble of deep neural networks to protect biometric features stored as a template. The proposed ensemble chooses two state-of-the-art CNN architectures, ResNet and DenseNet, as base models for training. During training, the pre-trained weights enable the learning algorithm to converge faster. The weights obtained through…
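The abstract outlines the core design: two ImageNet-pretrained backbones (ResNet and DenseNet) combined into an ensemble that produces a protected template. As a rough illustration of that idea, here is a minimal PyTorch sketch that fuses the two backbones' embeddings into a single fixed-length template vector; the backbone depths, embedding size, fusion rule, and the `EnsembleTemplateNet` name are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of the ensemble idea described in the abstract: two
# ImageNet-pretrained backbones whose features are fused into one template.
# Layer choices, embedding size, and the fusion rule are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class EnsembleTemplateNet(nn.Module):
    def __init__(self, embedding_dim: int = 256):
        super().__init__()
        # Pre-trained weights give the ensemble a warm start, which the
        # abstract credits with faster convergence during training.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)

        # Strip the ImageNet classifier heads, keep the convolutional trunks.
        self.resnet = nn.Sequential(*list(resnet.children())[:-1])   # -> (B, 2048, 1, 1)
        self.densenet = nn.Sequential(densenet.features,
                                      nn.ReLU(inplace=True),
                                      nn.AdaptiveAvgPool2d(1))       # -> (B, 1024, 1, 1)

        # Project the concatenated features to a fixed-length template.
        self.head = nn.Linear(2048 + 1024, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.resnet(x).flatten(1)
        f2 = self.densenet(x).flatten(1)
        template = self.head(torch.cat([f1, f2], dim=1))
        # L2-normalize so templates can be compared by cosine similarity.
        return nn.functional.normalize(template, dim=1)


if __name__ == "__main__":
    net = EnsembleTemplateNet()
    dummy = torch.randn(2, 3, 224, 224)   # two face crops
    print(net(dummy).shape)               # torch.Size([2, 256])
```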

Cited by 4 publications (11 citation statements)
References 43 publications (64 reference statements)
“…Caltech Faces 1999 • Georgia Tech • 10k US Adult Faces
[31] - MOBIO
NN-learned:
[37], [38] CMU PIE • Extended Yale Face Database B • CMU Multi-PIE
[39] CMU PIE • Extended Yale Face Database B
[40] CMU PIE • FEI • Color FERET
[41] CMU PIE • FEI • Color FERET
[42] CMU PIE • FEI • Color FERET
[43] MS-Celeb-1M • CASIA-WebFace • AT&T / ORL • Own
CASIA-WebFace • AT&T / ORL • Own
[44] LFW
[45] CMU PIE • Extended Yale Face Database B • CMU Multi-PIE • WVU Multimodal
[46] VGGFace2 • MegaFace
[47] YouTube Faces • FaceScrub
[48] VGGFace2…”
Section: Methods Type Reference
Mentioning confidence: 99%
“…[12] Rank-1 IR (%), DET (FNIR vs. FPIR) (%), FNIR @ FPIR = {0, 0.1, 1} (%)
[13] ROC (FRR vs. FAR, plus EER) (%)
[14] TAR @ FAR = {1, 0.1, 0.01, 0.001} (%), ROC (TAR vs. FAR)
[15] [16] {TP, FP, TN, FN} @ FPR = 0.001
[17] FNMR and FMR (%) ≈
[18] EER, DET (Miss vs. False Alarm prob.) (%) ≈
[19] Accuracy (%)
[20] EER (%)
[21] Recognition rate (%)
[22] Accuracy (%), DET (FRR vs. FAR, plus EER) (%)
[23] AUC (%), EER (%), TPR (%), TAR @ FAR = 0.1 (%), Rank-1 DIR @ FAR = 1 (%)
[24] EER (%), GAR @ FAR (%)
[25] FAR and FRR (%)
[26] EER (%)
[27] EER, ROC (1 - FNMR vs. FMR)
[28] TAR @ FAR (%)
[29] IR (%) @ Rank = {1, 10, 50}, ROC (VR vs. FAR) (%) ≈ Accuracy (%), Verification Recognition @ FAR = 0.1 (%), Rank-1 DIR @ FAR = 1 (%), Own metrics: TIR, MIR, FIR (%)
[30] Accuracy, FAR, FRR, EER (%)
[31] 1 - FNMR (TMR) @ FMR = 10⁻³ (or 0.1%), ROC (1 - FNMR vs. FMR)
NN-learned:
[37], [38] EER (%), GAR @ FAR = {0, 1} (%)
[39] EER (%), GAR @ FAR = 1 (%), ROC (GAR vs. FAR)
[40] EER (%), GAR @ FAR = {0, 0.01, 0.1} (%), ROC (GAR vs. FAR)
[41] EER (%), GAR @ FAR = 0 (%)
[42] EER (%), GAR @ FAR (%)
[43] FAR and FRR (%)
[44] EER
[45] EER (%), GAR @ FAR = 0.01 (%), ROC (GAR vs. FAR) (%)
[46] Accuracy (%), GAR @ FAR = 0 (%), ROC (TAR vs. FAR)
[47] EER (%), GAR @ FAR = 0.1 (%), Maximum Average Precision
[48] GAR @ FAR = 0.1 (%)
[49] EER (%), FNMR @ FMR = 0.1 (%), DET (FNMR vs. FMR)
[50] EER (%)
[51] EER (%), ROC (TPR vs. FPR)
…of the “other” systems (i.e., those using unprotected templates or templates protected by other BTP methods) were deliberately/directly compared (e.g., in a table, on the same plot, or conceptually). On the other hand, an implicit/pa…”
Section: Methods Type Reference
Mentioning confidence: 99%
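Most of the metrics quoted in the snippet above reduce to threshold sweeps over genuine and impostor comparison scores. The following NumPy-only sketch (not code from the paper or the citing survey) shows how two of the most common ones, EER and GAR @ FAR, can be computed; the higher-score-means-more-similar convention and the quantile-based threshold choice are simplifying assumptions.

```python
# Hedged sketch of two recurring verification metrics: EER and GAR @ FAR.
# Assumes similarity scores where higher means a better match.
import numpy as np


def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Equal Error Rate: the operating point where FAR == FRR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))       # closest crossing point
    return float((far[idx] + frr[idx]) / 2)


def gar_at_far(genuine: np.ndarray, impostor: np.ndarray, target_far: float) -> float:
    """GAR @ FAR: genuine accept rate at the threshold that yields target_far."""
    # Threshold = the (1 - target_far) quantile of the impostor scores.
    t = np.quantile(impostor, 1 - target_far)
    return float((genuine >= t).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gen = rng.normal(0.8, 0.1, 1000)   # toy genuine similarity scores
    imp = rng.normal(0.4, 0.1, 1000)   # toy impostor similarity scores
    print(f"EER = {eer(gen, imp):.3%}, GAR @ FAR=1% = {gar_at_far(gen, imp, 0.01):.3%}")
```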