2012
DOI: 10.5120/7537-474

A Supervised Hybrid Methodology for Pose and Illumination Invariant 3D Face Recognition

Abstract: 2D face recognition systems encounter difficulties in recognizing faces under illumination variations. The depth map of 3D face data has the potential to handle this variation in the illumination of face images. View variations are handled using moment invariants, which serve as rotation-invariant features of the face image. For feature matching, an efficient fuzzy-neural technique is proposed. The PCA components of the normalized depth map and the moment invariants on mesh images are used s…
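As an illustration of the feature-construction step the abstract outlines, the sketch below computes the seven Hu moment invariants from a normalized depth map (rotation-invariant pose features) and concatenates them with PCA components of the depth data. This is a minimal sketch under stated assumptions, not the authors' implementation: the paper applies moment invariants to mesh images and matches features with a fuzzy-neural classifier, whereas here the moments are taken directly from the depth map and only feature extraction is shown. Function names and parameters are illustrative.

```python
# Illustrative sketch only; not the authors' code.
import numpy as np
import cv2
from sklearn.decomposition import PCA

def hu_moment_features(depth_map):
    """Seven Hu moment invariants of a normalized depth map (rotation invariant)."""
    m = cv2.moments(depth_map.astype(np.float32))
    hu = cv2.HuMoments(m).flatten()
    # Log-scale the moments, which span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def build_feature_matrix(depth_maps, n_components=20):
    """PCA of flattened depth maps concatenated with Hu moments, one row per face."""
    X = np.stack([d.ravel() for d in depth_maps])
    pca = PCA(n_components=n_components).fit(X)
    pca_feats = pca.transform(X)
    hu_feats = np.stack([hu_moment_features(d) for d in depth_maps])
    return np.hstack([pca_feats, hu_feats]), pca
```

In a full system the combined feature vectors would then go to the matcher (a fuzzy-neural classifier in the paper); for experimentation, any off-the-shelf classifier could stand in at that stage.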

Cited by 1 publication (3 citation statements) · References 9 publications
“…The proposed LGP-WHGO method was compared with state-of-the-art facial recognition methods on two databases. The method in [2] is a global feature-based method, the method in [13] is a local feature-based method, and the methods in [15][16][17][18][19] are multimodal hybrid methods. Tables 7 and 8 give RR comparisons with state-of-the-art methods on the CASIA and FRGC v2.0 databases, respectively.…”
Section: Comparison with State-of-the-Art Methods (mentioning)
confidence: 99%
“…The Bosphorus experimental dataset contains not only yaw-rotation face images (yaw 10, 20, 30, and 45 degrees) but also occluded face images (eye, mouth, eyeglasses, and hair occlusion); on that dataset, the RR reached 96.75%. The methods in [2,17] exploited the training advantages of neural networks for feature extraction and classification. The method in [18] used different combinations of features, including MSLBP, SLF, Gabor wavelets, and SIFT, for classification.…”
Section: Comparison with State-of-the-Art Methods (mentioning)
confidence: 99%