19th IEEE International Conference on Image Processing (ICIP 2012)
DOI: 10.1109/icip.2012.6467145
Probabilistic fusion of regional scores in 3D face recognition

Cited by 4 publications (2 citation statements)
References 17 publications
“…Fig. 8: Different types of expressions gathered for subject 04514 and their associated texture and 3D images. … [69] proposed to first search for the nasal area in the center of the image and then extract the outline of the diagonal area of the nasal region as a feature. Erdogmus et al. [77] proposed another local-feature-based method: they divided the face into several parts and then calculated the similarity of the corresponding parts between two 3D face images.…”
Section: Research on Expression-Invariant 3D Face Recognition
Citation type: mentioning
Confidence: 99%
“…Considerable research efforts have been devoted to handling expression problems by treating the human face as a rigid object, because some facial regions remain unchanged even under expression variations (Erdogmus et al., 2012; Bornak et al., 2010; Miao and Krim, 2011). Other approaches employ deformation algorithms to recover from expression distortions by extracting local features (Bowyer et al., 2006; Samir et al., 2006; Li et al., 2011; Smeets et al., 2013; …). … employ the ICP algorithm to match the probe image against the gallery image; they chose 28 regions from 38 facial areas as the best combination to achieve maximum matching performance.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
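This second snippet points to two further steps: turning per-region match results into a single score and picking the subset of regions (28 of 38 there) that maximizes recognition performance. The excerpt does not describe the paper's actual probabilistic fusion rule, so the sketch below substitutes a simple min-max normalization with sum-rule fusion plus a greedy forward selection driven by a d-prime separability criterion, purely as an illustration of the workflow; all function names and the synthetic data are hypothetical.

```python
import numpy as np

def minmax_normalize(scores):
    """Map a 1-D array of raw regional scores to [0, 1]."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def sum_rule_fusion(score_matrix, region_ids):
    """Average the normalized scores of the selected regions.

    score_matrix : (n_comparisons, n_regions) raw regional scores
    region_ids   : indices of the regions to fuse
    """
    norm = np.stack([minmax_normalize(score_matrix[:, r]) for r in region_ids], axis=1)
    return norm.mean(axis=1)

def d_prime(genuine, impostor):
    """Separation between genuine and impostor score distributions."""
    return abs(genuine.mean() - impostor.mean()) / np.sqrt(
        0.5 * (genuine.var() + impostor.var()) + 1e-12
    )

def greedy_region_selection(score_matrix, labels, max_regions):
    """Forward-select regions whose fused scores best separate the classes.

    labels : boolean array, True for genuine comparisons.
    """
    selected, remaining, best = [], list(range(score_matrix.shape[1])), -np.inf
    while remaining and len(selected) < max_regions:
        gains = []
        for r in remaining:
            fused = sum_rule_fusion(score_matrix, selected + [r])
            gains.append((d_prime(fused[labels], fused[~labels]), r))
        gain, r = max(gains)
        if gain <= best:            # stop once adding regions no longer helps
            break
        best = gain
        selected.append(r)
        remaining.remove(r)
    return selected, best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, n_regions = 400, 38
    labels = rng.random(n) < 0.5
    # synthetic regional scores: genuine pairs score higher on most regions
    scores = rng.normal(size=(n, n_regions)) + labels[:, None] * rng.uniform(0.0, 1.5, n_regions)
    chosen, sep = greedy_region_selection(scores, labels, max_regions=28)
    print(f"selected {len(chosen)} regions, d' = {sep:.2f}")
```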