2019 International Conference on Biometrics (ICB) 2019
DOI: 10.1109/icb45273.2019.8987306

NIR-to-VIS Face Recognition via Embedding Relations and Coordinates of the Pairwise Features

Cited by 10 publications (15 citation statements)
References 15 publications
“…We compared our method with other deep learning methods, including TRIVET [19], IDR [20], ADFL [4], CDL [21], WCNN [7], RM [13], and RGM [15]. In Table 2, our PRAM …”
Section: Comparison With Deep Learning Methods (mentioning)
confidence: 99%
“…We compared our method with other deep learning methods, including TRIVET [19], IDR [20], ADFL [4], CDL [21], WCNN [7], RM [13], and RGM [16]. In Table 2, our PRAM performed better than the RM, which pairwise concatenated the feature vectors with the addition of a conditional triplet loss (L_C).…”
Section: Comparison With Deep Learning Methods (mentioning)
confidence: 99%
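The conditional triplet loss (L_C) referenced in the statement above is not spelled out in this excerpt. The following is a minimal PyTorch sketch of a triplet loss with a per-triplet margin chosen by a condition; the specific condition used here (whether the negative shares the anchor's spectral domain) and the margin values are hypothetical illustrations, not the authors' exact formulation.

```python
# Minimal sketch of a triplet loss with a conditional margin, assuming a PyTorch
# setup. The conditioning rule (same-domain vs. cross-domain negative) is a
# hypothetical choice for illustration only.
import torch
import torch.nn.functional as F


def conditional_triplet_loss(anchor, positive, negative, same_domain,
                             margin_same=0.3, margin_cross=0.5):
    """anchor/positive/negative: (B, D) L2-normalized embeddings.
    same_domain: (B,) bool tensor, True if the negative shares the anchor's domain."""
    d_ap = F.pairwise_distance(anchor, positive)   # (B,) anchor-positive distances
    d_an = F.pairwise_distance(anchor, negative)   # (B,) anchor-negative distances
    # Select the margin per triplet according to the (hypothetical) condition.
    margin = torch.where(same_domain,
                         torch.full_like(d_ap, margin_same),
                         torch.full_like(d_ap, margin_cross))
    return F.relu(d_ap - d_an + margin).mean()


# Usage with random embeddings:
B, D = 8, 256
a, p, n = (F.normalize(torch.randn(B, D), dim=1) for _ in range(3))
flag = torch.rand(B) > 0.5
loss = conditional_triplet_loss(a, p, n, flag)
```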
“…In Table 3, our approach exhibits the best performance on the BUAA-VisNir database with a large variance in emotion and pose.

Models: CASIA NIR-VIS 2.0 [1], Rank-1 Acc. (%) / VR@FAR=0.1% (%)
TRIVET [19]: 95.7 / 78
IDR [20]: 97.33 / 95.73
ADFL [4]: 98.15 / 97.21
CDL [21]: 98.62 / 98.32
WCNN [7]: 98.7 / 98.4
RM [13]: 94.73 / 94.31
RGM [16]: 97…

Models: BUAA-VisNir, Rank-1 Acc. (%) / VR@FAR=0.1% (%)
TRIVET [19]: 93.9 / 80.9
IDR [20]: 94.3 / 84.7
ADFL [4]: 95.2 / 95.3
CDL [21]: 96.9 / 95.9
WCNN [7]: 97.4 / 96
RGM [16]: 97.…

Compared to the WCNN and ADFL, our method is slightly lower on the CASIA NIR-VIS 2.0 database but still demonstrates competitive performance, and it is higher by 2.04% and 4.24% on the BUAA database.…”
Section: Comparison With Deep Learning Methods (mentioning)
confidence: 99%
“…The feature matching is conducted on the latent subspace. Cho et al. [3] presented a post-processing relation module to capture relations and coordinates of the pairwise features to reduce the domain discrepancy. In addition, a triplet loss with a conditional margin is introduced to reduce intra-class variation in training.…”
Section: Related Work (mentioning)
confidence: 99%
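As a rough illustration of the relation module described in the statement above (embedding relations and coordinates of pairwise NIR/VIS features), here is a minimal PyTorch sketch. The layer sizes, the normalized (x, y) coordinate encoding, the all-pairs construction, and the mean pooling are assumptions made for illustration, not the authors' exact design.

```python
# Minimal sketch of embedding relations and coordinates of pairwise features,
# assuming PyTorch. Spatial vectors from the NIR and VIS feature maps are paired,
# their normalized (x, y) coordinates are appended, and a small MLP embeds each
# pair before average pooling over all pairs.
import torch
import torch.nn as nn


class PairwiseRelationModule(nn.Module):
    def __init__(self, in_dim, hidden_dim=256, out_dim=128):
        super().__init__()
        # Each pair: one NIR vector + one VIS vector + 2 coordinates for each location.
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim + 4, hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim))

    @staticmethod
    def _coords(h, w, device):
        # Normalized grid coordinates in [-1, 1] for every spatial location.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=device),
                                torch.linspace(-1, 1, w, device=device),
                                indexing="ij")
        return torch.stack([xs, ys], dim=-1).reshape(h * w, 2)    # (HW, 2)

    def forward(self, nir_map, vis_map):
        # nir_map, vis_map: (B, C, H, W) feature maps from a shared backbone.
        b, c, h, w = nir_map.shape
        coords = self._coords(h, w, nir_map.device)               # (HW, 2)
        nir = nir_map.flatten(2).transpose(1, 2)                  # (B, HW, C)
        vis = vis_map.flatten(2).transpose(1, 2)                  # (B, HW, C)
        nir = torch.cat([nir, coords.expand(b, -1, -1)], dim=-1)  # (B, HW, C+2)
        vis = torch.cat([vis, coords.expand(b, -1, -1)], dim=-1)
        # All HW x HW cross-domain pairs, embedded and averaged.
        pairs = torch.cat([nir.unsqueeze(2).expand(-1, -1, h * w, -1),
                           vis.unsqueeze(1).expand(-1, h * w, -1, -1)], dim=-1)
        return self.mlp(pairs).mean(dim=(1, 2))                   # (B, out_dim)


# Usage on dummy feature maps:
rm = PairwiseRelationModule(in_dim=64)
rel = rm(torch.randn(2, 64, 7, 7), torch.randn(2, 64, 7, 7))      # (2, 128)
```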