2009 IEEE 12th International Conference on Computer Vision
DOI: 10.1109/iccv.2009.5459283
Robust facial feature tracking using selected multi-resolution linear predictors

Cited by 31 publications (42 citation statements)
References 12 publications
“…These approaches are more likely to be applicable to sign expressions, as they will have fewer constraints, having been trained on more natural data sets. An example of this is the work by Sheerman-Chase et al who combined static and dynamic features from tracked facial features (based on Ong's facial feature tracker [78]) to recognise more abstract facial expressions, such as 'Understanding' or 'Thinking' [89]. They note that their more complex dataset, while labelled, is still ambiguous in places due to the disagreement between human annotators.…”
Section: Non-manual Features
confidence: 99%
“…csam and hilda, which are consistent across views, indicating improved robustness to viewpoint changes for all types of primitive feature. The goal of this experiment is to obtain the best viewing angle for computing the lip-reading and active appearance model (AAM) features extracted from each view. They use linear-predictor-based tracking, which gives a more robust lip contour than the AAM introduced by [9]. In the audio-visual speech recognition system, visual features obtained from the Discrete Cosine Transform (DCT) and the active appearance model (AAM) were projected onto a 41-dimensional feature space using LDA, as proposed by [34]. The systems reduce dimensionality with Linear Discriminant Analysis (LDA) or Fisher's Linear Discriminant (FLD), as introduced by [57].…”
Section: Related Work
confidence: 99%
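As background to the LDA projection described in the excerpt above, Fisher's discriminant can be sketched in a few lines of numpy. This is a minimal, generic sketch, not the cited systems' implementation: the feature extraction, class structure, and the 41-dimensional target space are specific to [34] (note that LDA yields at most C − 1 informative directions for C classes, so a 41-dimensional projection implies many classes).

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher LDA: directions maximising between-class vs within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T
    # Generalised eigenproblem Sb w = lambda Sw w, via pinv for stability.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

# Toy usage: two synthetic classes, projected to one discriminant direction.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(3.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, 1)   # (5, 1) projection matrix
```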
“…First, they use a primitive approach that combines the feature vectors (CFV), called cat; second, they concatenate the features and reduce the dimensionality using PCA, as proposed by [14], which used the csam feature. The CFV is improved by applying an LDA over a window of frames, as proposed by [15], which represented the hilda features for frontal lip-reading; this has been applied to the two discriminating features introduced by [9]. For all features, z-score normalization is used, which has been shown to improve the separability between the classes' features [31]. The best viewing angle is sought for the primitive features, i.e., those that are not derived from a further PCA or LDA.…”
Section: Expire Vector Feature
confidence: 99%
“…Tracking was performed by linear predictor tracking [6]. Because the tracker requires multiple frames to be annotated for training, κ = 48 points {T_i}, i ∈ [1..κ], that could be consistently located were selected for use and manually marked (see Figure 2).…”
Section: Feature Extraction and Feature Selection, A. Tracking and
confidence: 99%
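The core idea of linear predictor tracking, as cited in the excerpt above, is to learn a linear map from sparse support-pixel intensity differences to a corrective point displacement. The sketch below illustrates this on a synthetic image and is only an assumption-laden toy: the support-pixel layout, training displacement range, and the multi-resolution predictor selection of the actual paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 100x100 image with curvature, so displacement is recoverable.
ys, xs = np.mgrid[0:100, 0:100].astype(float)
image = (xs - 50.0) ** 2 + (ys - 50.0) ** 2

# Support pixels: fixed (dx, dy) offsets around the tracked point.
offsets = rng.integers(-10, 11, size=(60, 2))

def sample(img, p, offs):
    """Intensities at the support pixels around point p (rounded to the grid)."""
    idx = np.round(p + offs).astype(int)
    return img[idx[:, 1], idx[:, 0]].astype(float)

def train_linear_predictor(img, point, offs, n_train=200, max_disp=5.0):
    """Learn matrix P mapping intensity differences to a corrective displacement."""
    s_ref = sample(img, point, offs)
    S, D = [], []
    for _ in range(n_train):
        d = rng.uniform(-max_disp, max_disp, 2)    # synthetic displacement
        S.append(sample(img, point + d, offs) - s_ref)
        D.append(-d)                               # correction back to the point
    P, *_ = np.linalg.lstsq(np.asarray(S), np.asarray(D), rcond=None)
    return s_ref, P

def predict_correction(img, p, offs, s_ref, P):
    """One tracking update: predicted shift moving p back onto the feature."""
    return (sample(img, p, offs) - s_ref) @ P

point = np.array([50.0, 50.0])
s_ref, P = train_linear_predictor(image, point, offsets)
drifted = point + np.array([3.0, -2.0])            # mis-aligned estimate
corrected = drifted + predict_correction(image, drifted, offsets, s_ref, P)
```

One update of this kind per frame (often cascaded over several predictors) is what makes the approach fast at run time: tracking reduces to a sparse sampling and a matrix multiply.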
“…The system uses linear predictor tracking [6] to track a selected set of facial locations, and makes use of geometric relations between points to encode facial shape information. Feature selection is then used to select the subset of feature components that are relevant to a specific NVC.…”
Section: Introduction
confidence: 99%