2009 Fifth International Conference on Image and Graphics
DOI: 10.1109/icig.2009.120

Audio-Visual Emotion Recognition Based on a DBN Model with Constrained Asynchrony

Cited by 8 publications (12 citation statements)
References 19 publications
“…7 shows an example of the tracking results. Experimental results demonstrate that the BTSM based tracking system can reliably and precisely track the face shape through long sequences [7].…”
Section: B. Feature Extraction
confidence: 99%
“…Classical DBN model is shown as Fig.4 [7], which can be used for speech recognition and speaker recognition.…”
Section: B. Traditional DBN Model
confidence: 99%
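The excerpt above only names the classical single-stream DBN used for speech and speaker recognition; as a point of reference, a minimal sketch follows, assuming that in the simplest case inference in such a model reduces to an HMM-style forward recursion over hidden states. The function name and interface are illustrative, not taken from the cited papers.

```python
import numpy as np

# Minimal sketch, not the cited papers' implementation: forward recursion
# over hidden states of a single-stream DBN treated as an HMM.
def forward_loglik(log_init, log_trans, log_obs):
    """log_init: (S,) initial state log-probs; log_trans: (S, S) transition
    log-probs; log_obs: (T, S) per-frame observation log-likelihoods.
    Returns the total log-likelihood of the frame sequence."""
    alpha = log_init + log_obs[0]
    for t in range(1, log_obs.shape[0]):
        # Sum over predecessor states in log space, then add the frame score.
        alpha = log_obs[t] + np.logaddexp.reduce(alpha[:, None] + log_trans, axis=0)
    return np.logaddexp.reduce(alpha)
```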
“…In [13], a two stream state asynchronous DBN model (Asy_DBN) has been proposed. In this paper, to combine the MFCC features, local prosodic features and visual emotion features more reasonably, we extend the Asy_DBN model to a triple stream audio visual asynchronous DBN (T_AsyDBN) model, as shown in Fig.…”
Section: Triple Stream Asynchronous DBN (T_AsyDBN) Model
confidence: 99%
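This excerpt describes extending the two-stream state-asynchronous DBN to a triple-stream model over MFCC, local prosodic, and visual emotion features. The sketch below illustrates the constrained-asynchrony idea only in outline, assuming each stream advances through its own sub-state index and the pairwise lag between streams is capped; the stream names and the lag limit are assumptions, not values from the paper.

```python
from itertools import product

# Hedged sketch of constrained asynchrony: enumerate the joint
# (mfcc, prosody, visual) sub-state triples whose pairwise lag stays
# within a fixed bound, pruning the joint state space a decoder searches.
def allowed_joint_states(n_states: int, max_async: int):
    return [
        (a, p, v)
        for a, p, v in product(range(n_states), repeat=3)
        if max(abs(a - p), abs(a - v), abs(p - v)) <= max_async
    ]

# Example: 5 sub-states per stream with a lag limit of 1 keeps 29 of the
# 125 unconstrained triples.
print(len(allowed_joint_states(5, 1)))
```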
“…Fig.1 In [13], an 8-dimensional feature vector, describing the movements of eyes and eyebrows in the upper face, has been used. For the sake of completeness, these features are summarized in Table 1, where point 84 has been defined as the midpoint between landmarks 41 and 43, representing the nose apex.…”
Section: D. Facial Features
confidence: 99%
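The facial-feature excerpt defines a derived point 84 as the midpoint of landmarks 41 and 43, representing the nose apex. A small sketch of that computation follows, assuming landmarks are supplied as (x, y) coordinates keyed by the tracker's point numbers; the function name and data layout are illustrative, and the full 8-dimensional upper-face feature vector from the citing paper's Table 1 is not reproduced here.

```python
import numpy as np

def nose_apex(landmarks):
    """Point 84: midpoint of landmarks 41 and 43, per the excerpt above.
    `landmarks` maps point numbers to (x, y) coordinates."""
    p41 = np.asarray(landmarks[41], dtype=float)
    p43 = np.asarray(landmarks[43], dtype=float)
    return (p41 + p43) / 2.0

print(nose_apex({41: (120.0, 152.0), 43: (140.0, 148.0)}))  # -> [130. 150.]
```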