2018
DOI: 10.1016/j.patcog.2017.10.020
From one to many: Pose-Aware Metric Learning for single-sample face recognition

Cited by 17 publications (10 citation statements)
References 20 publications
“…Moreover, PCANet also serves as a reference point for reviewing advanced deep learning architectures applied to large numbers of image classification tasks. Also, Deng, Hu, Wu, and Guo [14] proposed synthesizing face images to mitigate varying illumination and pose, respectively, from only one frontal face image, developing an extended generic elastic model (GEM) and a multi-depth model. Pose-aware metric learning (PAML) was learned by means of linear regression to map each pose into its corresponding metric space, and it yielded an accuracy of 100%.…”
Section: Related Work
confidence: 99%
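The statement above compresses PAML into one sentence; a minimal sketch of the underlying idea, pose-specific linear regression that maps non-frontal features into the frontal metric space before single-sample matching, is given below. The variable names, the ridge solution, and the toy data are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a pose-aware linear-regression mapping, assuming paired
# generic training features are available per pose bin. Illustrative only,
# not the PAML paper's exact formulation.
import numpy as np

def learn_pose_mapping(X_pose, X_frontal, lam=0.1):
    """Fit W so that X_pose @ W approximates X_frontal (ridge regression).

    X_pose, X_frontal: (n_samples, d) features of the same generic subjects
    under a given pose and under the frontal view, respectively.
    """
    d = X_pose.shape[1]
    A = X_pose.T @ X_pose + lam * np.eye(d)
    B = X_pose.T @ X_frontal
    return np.linalg.solve(A, B)  # (d, d) pose-specific mapping

def match_single_sample(probe_feat, gallery_feats, W_pose):
    """Map a non-frontal probe into the frontal space and rank the
    single-sample gallery by cosine similarity."""
    mapped = probe_feat @ W_pose
    mapped /= np.linalg.norm(mapped) + 1e-12
    g = gallery_feats / (np.linalg.norm(gallery_feats, axis=1, keepdims=True) + 1e-12)
    scores = g @ mapped
    return int(np.argmax(scores)), scores

# Toy usage with random vectors standing in for real face descriptors.
rng = np.random.default_rng(0)
X_p, X_f = rng.normal(size=(200, 64)), rng.normal(size=(200, 64))
W = learn_pose_mapping(X_p, X_f)
identity, _ = match_single_sample(rng.normal(size=64), rng.normal(size=(10, 64)), W)
```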
“…When the multi-perspective light field is scheduled [33][34][35][36], the key issue is how to ensure that a maneuvering target observed from different perspectives is identified as the same target. This part uses the proposed algorithm to identify feature-point matches for maneuvering targets entering the monitoring area.…”
Section: Maneuvering Target Recognition
confidence: 99%
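The excerpt refers to that paper's own matching algorithm, which is not reproduced here; as a stand-in, the sketch below shows a generic feature-point matching step (OpenCV ORB descriptors with a brute-force Hamming matcher) for deciding whether detections from two views belong to the same target. The match-count threshold and image names are hypothetical.

```python
# Generic feature-point matching sketch (ORB + brute-force Hamming matcher).
# A stand-in illustration, not the citing paper's proposed algorithm.
import cv2

def match_target(view_a, view_b, max_matches=50):
    """Return the best feature-point correspondences between two grayscale views."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(view_a, None)
    kp_b, des_b = orb.detectAndCompute(view_b, None)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return matches[:max_matches]

# Usage idea: treat two detections as the same target when enough good
# correspondences survive (threshold chosen purely for illustration).
# img_a = cv2.imread("cam1_crop.png", cv2.IMREAD_GRAYSCALE)
# img_b = cv2.imread("cam2_crop.png", cv2.IMREAD_GRAYSCALE)
# same_target = len(match_target(img_a, img_b)) > 20
```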
“…In [19], given single training images and exploiting a sufficiently rich bootstrap set, the method produces the corresponding 3D face rendering and is potentially able to synthesize images with any pose or illumination variation. This is achieved by coupling a multi-depth 3D generic elastic model with the quotient image technique, aiming at synthesizing virtual faces with a desired illumination and expression, given a frontal image.…”
Section: Related Work
confidence: 99%
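As a rough illustration of the quotient-image side of that pipeline, the sketch below relights a single frontal face against a small bootstrap illumination basis under a simple Lambertian assumption. The basis construction, array shapes, and least-squares step are assumptions for illustration only and do not reproduce the multi-depth GEM component.

```python
# Minimal NumPy sketch of quotient-image style relighting, assuming a small
# bootstrap illumination basis of aligned frontal faces. Illustrative only.
import numpy as np

def quotient_relight(I_y, A, target_coeffs):
    """Re-render a novel frontal face I_y under a new illumination.

    I_y:           (h*w,) novel face under some unknown illumination.
    A:             (h*w, k) bootstrap illumination basis (one column per light).
    target_coeffs: (k,) coefficients describing the desired illumination.
    """
    # Estimate the illumination coefficients of I_y against the bootstrap basis.
    x, *_ = np.linalg.lstsq(A, I_y, rcond=None)
    # Quotient image: per-pixel ratio between the novel face and the bootstrap
    # rendering under the same (estimated) illumination.
    Q = I_y / np.clip(A @ x, 1e-6, None)
    # Synthesize the novel face under the target illumination.
    return Q * (A @ target_coeffs)

# Toy usage with synthetic data standing in for aligned face images.
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(32 * 32, 3))) + 0.1
I_y = (A @ np.array([0.6, 0.3, 0.1])) * 1.2   # synthetic "novel" face
relit = quotient_relight(I_y, A, np.array([0.1, 0.2, 0.7]))
```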