2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2015.7301308
NIR-VIS heterogeneous face recognition via cross-spectral joint dictionary learning and reconstruction

Cited by 121 publications (68 citation statements)
References 28 publications
“…Besides, the proposed methods are also compared with three other methods [14], [15], [46] that are not based on subspace or metric learning. The method proposed in [46] is image synthesis-based.…”

Section: F. Results on CASIA NIR-VIS 2.0 Dataset
Confidence: 99%
“…Despite this, as the proposed methods are general metric learning methods, they can be combined with image synthesis-based methods and feature learning methods to take advantage of both. Because the three methods [14], [15], [46] are not open source, in the next subsection we tested the proposed methods when combined with two publicly available deep features. On the CASIA NIR-VIS 2.0 dataset, KMCM²L achieved a significant improvement, with a rank-1 result of 96.5 ± 0.4% (see Section VII-G and Table VI for details).…”

Section: F. Results on CASIA NIR-VIS 2.0 Dataset
Confidence: 99%
“…Their efficacy here shows that CNNs trained on big data learn a face representation robust enough to bridge the gap between VIS and NIR. CNN architectures trained on large VIS image sets greatly outperform hand-crafted features combined with explicit cross-modality learning models [25, 13, 53], suggesting that explicit cross-modal learning might be unnecessary for VIS-NIR. Comparing the CNNs, LeanFace works better than Light CNN because it uses (1) larger training data (6M vs. 5M), (2) a better loss function (softmax + center loss vs. softmax alone), and (3) a deeper architecture.…”

Section: CASIA NIR-VIS 2.0 Database
Confidence: 99%
“…
Method                      Acc. (%)
C-CBFD+LDA [25]             81.8
Dictionary Learning [13]    78.46
Gabor+RBM [53]              86.16
Light CNN [49]              91.88
LeanFace                    97.27
AttNet                      2.38
GTNN (LeanFace, AttNet)     99.94
…”

Section: CASIA NIR-VIS 2.0 Database
Confidence: 99%