2008
DOI: 10.1109/tmm.2008.921737

Audio–Visual Affective Expression Recognition Through Multistream Fused HMM

Cited by 116 publications (53 citation statements)
References 29 publications
“…The affect recognition studies [2,3] have found that many studies have employed supervised learning techniques for emotion recognition. Researchers have successfully used classification methods such as Bayesian classifiers [4], Hidden Markov Models (HMM) [15], and SVM [16,17] for affect recognition. Results in [6,10,11] and the classification techniques discussed in multimodal research surveys [32] showed that the performance improved in affect recognition by using SVM classifiers.…”
Section: Introduction (mentioning)
confidence: 99%
“…Researchers [11,15,36] have used facial and audio data fusion at the feature level for affect recognition. Hence, this paper uses concatenation of features from face, hand, head, body, audio, and the behavioral rule-based features to form a joint feature vector for multimodal emotion recognition system implementation.…”
Section: Introduction (mentioning)
confidence: 99%
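The feature-level (early) fusion described in the statement above amounts to concatenating per-modality feature vectors into one joint vector before training a single classifier. The sketch below illustrates that idea only; the modality names, feature dimensions, and the SVM classifier are illustrative assumptions, not the cited authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative early-fusion sketch: per-modality feature vectors are
# concatenated into one joint vector before a single classifier is trained.
# Shapes, modality names, and labels are assumptions for the example.
rng = np.random.default_rng(0)
n_samples = 200
face  = rng.normal(size=(n_samples, 30))    # e.g. facial expression features
audio = rng.normal(size=(n_samples, 20))    # e.g. prosodic/spectral features
body  = rng.normal(size=(n_samples, 10))    # e.g. gesture/posture features
labels = rng.integers(0, 4, size=n_samples) # four affective classes (placeholder)

# Early fusion: concatenate along the feature axis to form the joint vector.
joint = np.concatenate([face, audio, body], axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(joint, labels)
print("training accuracy:", clf.score(joint, labels))
```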
“…Different cues of facial expression, speech, body gestures and context are used to recognize human affective states. More and more efforts are being made on multimodal fusion because integrating the information from multiple channels may lead to an improved performance of affect recognition [4]- [7].…”
Section: Introduction (mentioning)
confidence: 99%
“…And a Boltzmann chain can implement the same classification as the corresponding HMM with the same trellis topology. Unlike the MFHMM ( [4]) which fuses component HMMs together by connecting the hidden states of one HMM to the observation of another HMM, our method interconnects the hidden states of two Boltzmann chains with a correlation matrix. This topology is more natural for multimodal affect recognition.…”
Section: Introduction (mentioning)
confidence: 99%
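The topological contrast drawn in that statement (MFHMM linking one stream's hidden states to another stream's observations, versus linking the two streams' hidden states through a correlation matrix) can be illustrated with a toy joint score over two discrete state sequences. This is a minimal sketch of the coupling idea only, assuming made-up state counts and parameters; it is not the cited models' training or inference procedure.

```python
import numpy as np

# Toy illustration of cross-chain coupling: two Markov chains over discrete
# hidden states are scored jointly, adding a cross-chain term taken from a
# correlation (coupling) matrix C between co-occurring states.
# All dimensions and numbers are illustrative assumptions.
rng = np.random.default_rng(1)
n_audio_states, n_video_states, T = 3, 4, 6

log_A_audio = np.log(rng.dirichlet(np.ones(n_audio_states), size=n_audio_states))
log_A_video = np.log(rng.dirichlet(np.ones(n_video_states), size=n_video_states))
C = rng.normal(size=(n_audio_states, n_video_states))  # cross-chain coupling

audio_path = rng.integers(0, n_audio_states, size=T)
video_path = rng.integers(0, n_video_states, size=T)

def joint_score(a_path, v_path):
    """Sum of within-chain transition log-potentials plus the
    correlation-matrix coupling between co-occurring hidden states."""
    score = 0.0
    for t in range(1, T):
        score += log_A_audio[a_path[t - 1], a_path[t]]
        score += log_A_video[v_path[t - 1], v_path[t]]
    for t in range(T):
        score += C[a_path[t], v_path[t]]
    return score

print("joint log-score:", joint_score(audio_path, video_path))
```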
“…In the end, the ranked classification results produced from each cluster are fused at the decision level. Other works in the literature have also applied hybrid fusion approaches that are different from ours [128,129,192,222,224]. Islam et al [223] propose a three-phase fusion process toward audio and visual modalities: fusion within a single modality, fusion across modalities in the feature level, and fusion on the decision level according to the reliability of each modality.…”
Section: At Which Level Should the Fusion Be Performed? (mentioning)
confidence: 99%
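The decision-level step mentioned in that statement, where per-modality outputs are combined according to each modality's reliability, can be sketched as a reliability-weighted sum of class posteriors. The weights and posteriors below are made-up placeholders, and the function name is hypothetical; this is not the cited authors' exact fusion rule.

```python
import numpy as np

# Sketch of decision-level fusion weighted by per-modality reliability.
# Posteriors and reliability weights here are placeholder values.
def fuse_decisions(posteriors, reliabilities):
    """Combine per-modality class posteriors with reliability weights
    (normalized to sum to one) and return the fused class and posterior."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()
    fused = np.zeros_like(np.asarray(posteriors[0], dtype=float))
    for p, wi in zip(posteriors, w):
        fused += wi * np.asarray(p, dtype=float)
    return int(np.argmax(fused)), fused

audio_post = [0.10, 0.70, 0.20]   # P(class | audio), placeholder
video_post = [0.30, 0.30, 0.40]   # P(class | video), placeholder
label, fused = fuse_decisions([audio_post, video_post], reliabilities=[0.4, 0.6])
print("fused posterior:", fused, "-> class", label)
```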