2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings.
DOI: 10.1109/cvpr.2003.1211344
Bayesian tangent shape model: estimating shape and pose parameters via Bayesian inference

Abstract: In this paper we study the

Cited by 81 publications (76 citation statements)
References 15 publications
“…The contour of the lips is obtained through the Bayesian Tangent Shape Model (BTSM) [Zhou 2003]. Fig.…”
Section: D. Lip Contour Extraction
Mentioning confidence: 99%
“…ASM utilized local texture information to search for a better template, while AAM constructed appearance models from shape parameters and global texture constraints. After ASM and AAM, Zhou et al. [3] introduced the Bayesian Tangent Shape Model (BTSM) with an EM-based method to implement the MAP estimation, Liang et al. [4] utilized a Constrained Markov Network for accurate face alignment, and Boosting Appearance Models (BAM) [5] presented a discriminative method with a boosting algorithm and rectangular Haar-like features, achieving outstanding accuracy and robustness. Inspired by BAM, face alignment can be made faster with more discriminative features, while boosting classifiers bring the benefit of computational efficiency.…”
Section: Introduction
Mentioning confidence: 99%
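The MAP estimation of shape parameters mentioned in this citation can be illustrated in its simplest form. The sketch below is a hedged simplification, not the paper's exact algorithm: it assumes an orthonormal PCA shape basis, a Gaussian prior b_k ~ N(0, eigvals[k]) on each shape parameter, and isotropic observation noise, and it omits the pose-alignment (tangent-space) step of full BTSM-style inference. The function name `map_shape_params` is an illustrative choice, not from the source.

```python
import numpy as np

def map_shape_params(y, mean_shape, basis, eigvals, sigma2):
    """MAP estimate of PCA shape parameters b for observed landmarks y.

    Model (simplified): y = mean_shape + basis @ b + noise,
    with prior b_k ~ N(0, eigvals[k]) and noise ~ N(0, sigma2 * I).
    For an orthonormal basis the posterior mean is a shrunken
    projection: b_k = eigvals[k] / (eigvals[k] + sigma2) * phi_k^T (y - mean).
    """
    proj = basis.T @ (y - mean_shape)            # project residual onto modes
    return (eigvals / (eigvals + sigma2)) * proj # shrink toward the prior mean

# toy example: a 4-D "shape" vector with 2 orthonormal modes
y = np.array([2.0, 3.0, 0.0, 0.0])
b = map_shape_params(y, np.zeros(4), np.eye(4)[:, :2],
                     np.array([4.0, 1.0]), sigma2=1.0)
# the high-variance mode is shrunk mildly, the low-variance mode strongly
```

The shrinkage factor is what distinguishes this MAP estimate from the plain least-squares projection used in classical ASM fitting: noisy observations are pulled toward the prior in proportion to how little variance each mode explains.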
“…Active Shape Models (ASMs) [5] have undergone several changes since the classical ASM was proposed in [6]. Recent variants [14,20,32] are more robust and hence more popular for this task, as they are not only faster but also more accurate when dealing with unseen variations that are not present in the training images.…”
Section: Introduction
Mentioning confidence: 99%