2015
DOI: 10.1016/j.patcog.2015.04.025

A spatial-temporal framework based on histogram of gradients and optical flow for facial expression recognition in video sequences

Abstract: Original citation: Fan, Xijian and Tjahjadi, Tardi. (2015) A spatial-temporal framework based on histogram of gradients and optical flow for facial expression recognition in video sequences. Pattern Recognition, 48 (11). Copies of full items can be used for personal research or study, educational, or not-for-profit purposes without prior permission or charge, provided that the authors, title and full bibliographic details are credited, and a hyperlink and/or URL is given for the original metadata page and the content…

Cited by 75 publications (31 citation statements); references 30 publications.
“…The tables show that the framework using the simple fusion strategy of two features performs better than using each feature individually, and that the proposed fusion strategy achieves the best performance. In Table 6, we compare the proposed feature with the method of Eskil et al. [43], the static method of Lucey et al. [32] and our previous work [44], which shows that the fused feature achieves an average recognition rate of 88.30% for all seven facial expressions and outperforms the other methods. Thus, we can also conclude that the combination of two dynamic features improves the recognition rate.…”
Section: Results (mentioning)
confidence: 96%
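The feature fusion described in the quoted passage can be illustrated with a minimal sketch: two pre-extracted descriptors (one spatial, one motion-based) are concatenated per sample and fed to an SVM. The names, array shapes and data below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(spatial_feats, motion_feats):
    # Simple feature-level fusion: concatenate the two descriptors per sample.
    return np.hstack([spatial_feats, motion_feats])

# Hypothetical pre-extracted descriptors and labels for seven expression classes.
X_spatial = np.random.rand(200, 128)
X_motion = np.random.rand(200, 64)
y = np.random.randint(0, 7, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(fuse_features(X_spatial, X_motion), y)
```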
“…Thus, we can also conclude that the combination of two dynamic features improves the recognition rate. We also conducted an experiment on the MMI dataset, comparing the proposed framework with the method that uses LBP and SVM [37], and the methods in [45] and [44] that are evaluated using the same classification strategy of 10-fold cross-validation. The average recognition rates are shown in Table 7.…”
Section: Results (mentioning)
confidence: 99%
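A minimal sketch of the 10-fold cross-validation protocol mentioned in the quote, assuming a precomputed feature matrix X and label vector y; the classifier and data are placeholders, not the evaluated methods.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X = np.random.rand(300, 192)           # hypothetical per-sequence descriptors
y = np.random.randint(0, 7, size=300)  # seven expression labels

# Average accuracy over the 10 folds is reported as the recognition rate.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=cv)
print(f"average recognition rate over 10 folds: {scores.mean():.2%}")
```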
“…In [19], Xijian and Tjahjadi extracted spatial pyramid histogram of gradients to three-dimensional facial features. They captured both the spatial and motion information of facial expressions by integrating the extracted features with dense optical flow.…”
Section: Literature Review (mentioning)
confidence: 99%
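The pairing of a gradient-histogram descriptor with dense optical flow described above can be sketched with OpenCV as follows; the descriptor sizes, flow parameters and the coarse flow-magnitude histogram are illustrative assumptions, not the paper's exact features.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()  # default 64x128 detection window

def frame_descriptor(prev_gray, curr_gray):
    # Spatial appearance: HOG on the current frame, resized to the HOG window.
    spatial = hog.compute(cv2.resize(curr_gray, (64, 128))).ravel()
    # Motion: Farneback dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Coarse histogram of flow magnitudes as a simple motion descriptor.
    motion = np.histogram(mag, bins=16, range=(0, 10))[0].astype(np.float32)
    return np.concatenate([spatial, motion])
```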
“…Works that exploit video data focus on the importance of the temporal evolution of the input face. The system proposed by Fan and Tjahjadi [3] processes four sub-regions of the face: forehead, eyes/eyebrows, nose and mouth. They used an extension of the spatial pyramid histogram of gradients and dense optical flow to extract spatial and dynamic features from video sequences, and adopted a multi-class SVM-based classifier with a one-to-one strategy to recognise facial expressions.…”
Section: Introduction (mentioning)
confidence: 99%
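The one-to-one (one-vs-one) multi-class SVM strategy mentioned in this quote trains one binary SVM per pair of expression classes and combines their votes. A minimal sketch with placeholder data is shown below; scikit-learn's SVC already uses this scheme internally, here made explicit with OneVsOneClassifier.

```python
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

X = np.random.rand(150, 256)           # hypothetical per-sequence descriptors
y = np.random.randint(0, 7, size=150)  # seven expression classes

# One binary SVM per class pair (21 classifiers for 7 classes), combined by voting.
ovo = OneVsOneClassifier(SVC(kernel="linear"))
ovo.fit(X, y)
print(ovo.predict(X[:5]))
```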